00:00:00.001 Started by upstream project "autotest-per-patch" build number 132571 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.108 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.109 The recommended git tool is: git 00:00:00.109 using credential 00000000-0000-0000-0000-000000000002 00:00:00.111 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.170 Fetching changes from the remote Git repository 00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.228 Using shallow fetch with depth 1 00:00:00.228 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.228 > git --version # timeout=10 00:00:00.278 > git --version # 'git version 2.39.2' 00:00:00.278 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.300 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.300 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.917 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.930 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.942 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.942 > git config core.sparsecheckout # timeout=10 00:00:04.953 > git read-tree -mu HEAD # timeout=10 00:00:04.968 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.989 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.989 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.078 [Pipeline] Start of Pipeline 00:00:05.091 [Pipeline] library 00:00:05.092 Loading library shm_lib@master 00:00:05.093 Library shm_lib@master is cached. Copying from home. 00:00:05.112 [Pipeline] node 00:00:05.121 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.123 [Pipeline] { 00:00:05.135 [Pipeline] catchError 00:00:05.137 [Pipeline] { 00:00:05.151 [Pipeline] wrap 00:00:05.159 [Pipeline] { 00:00:05.168 [Pipeline] stage 00:00:05.170 [Pipeline] { (Prologue) 00:00:05.441 [Pipeline] sh 00:00:05.724 + logger -p user.info -t JENKINS-CI 00:00:05.744 [Pipeline] echo 00:00:05.747 Node: WFP8 00:00:05.755 [Pipeline] sh 00:00:06.051 [Pipeline] setCustomBuildProperty 00:00:06.062 [Pipeline] echo 00:00:06.063 Cleanup processes 00:00:06.066 [Pipeline] sh 00:00:06.347 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.347 2175935 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.360 [Pipeline] sh 00:00:06.638 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.638 ++ grep -v 'sudo pgrep' 00:00:06.638 ++ awk '{print $1}' 00:00:06.638 + sudo kill -9 00:00:06.638 + true 00:00:06.652 [Pipeline] cleanWs 00:00:06.661 [WS-CLEANUP] Deleting project workspace... 00:00:06.661 [WS-CLEANUP] Deferred wipeout is used... 
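The cleanup fragment above ("+ sudo pgrep -af ... + sudo kill -9 ... + true") removes processes left over from a previous run that still reference the workspace SPDK tree, and tolerates the case where nothing is found. A minimal standalone sketch of that step, assuming the workspace path shown in the log (illustration only, not the pipeline's own script):

#!/usr/bin/env bash
# Kill stray processes from an earlier run that still reference this workspace.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
# pgrep -af prints "<pid> <full command line>"; drop the pgrep invocation itself, keep the PIDs.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# Mirror the pipeline's trailing "+ true": an empty PID list must not fail the stage.
if [ -n "$pids" ]; then
    sudo kill -9 $pids || true
fi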
00:00:06.666 [WS-CLEANUP] done 00:00:06.669 [Pipeline] setCustomBuildProperty 00:00:06.677 [Pipeline] sh 00:00:06.952 + sudo git config --global --replace-all safe.directory '*' 00:00:07.029 [Pipeline] httpRequest 00:00:07.418 [Pipeline] echo 00:00:07.419 Sorcerer 10.211.164.20 is alive 00:00:07.427 [Pipeline] retry 00:00:07.429 [Pipeline] { 00:00:07.440 [Pipeline] httpRequest 00:00:07.445 HttpMethod: GET 00:00:07.445 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.446 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.459 Response Code: HTTP/1.1 200 OK 00:00:07.460 Success: Status code 200 is in the accepted range: 200,404 00:00:07.460 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.791 [Pipeline] } 00:00:16.811 [Pipeline] // retry 00:00:16.820 [Pipeline] sh 00:00:17.109 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.124 [Pipeline] httpRequest 00:00:17.702 [Pipeline] echo 00:00:17.704 Sorcerer 10.211.164.20 is alive 00:00:17.714 [Pipeline] retry 00:00:17.716 [Pipeline] { 00:00:17.733 [Pipeline] httpRequest 00:00:17.738 HttpMethod: GET 00:00:17.738 URL: http://10.211.164.20/packages/spdk_4c65c64060cde15b20b8c3be3816c9ca02ca55e7.tar.gz 00:00:17.739 Sending request to url: http://10.211.164.20/packages/spdk_4c65c64060cde15b20b8c3be3816c9ca02ca55e7.tar.gz 00:00:17.745 Response Code: HTTP/1.1 200 OK 00:00:17.745 Success: Status code 200 is in the accepted range: 200,404 00:00:17.746 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4c65c64060cde15b20b8c3be3816c9ca02ca55e7.tar.gz 00:02:00.146 [Pipeline] } 00:02:00.167 [Pipeline] // retry 00:02:00.175 [Pipeline] sh 00:02:00.455 + tar --no-same-owner -xf spdk_4c65c64060cde15b20b8c3be3816c9ca02ca55e7.tar.gz 00:02:02.997 [Pipeline] sh 00:02:03.278 + git -C spdk log --oneline -n5 00:02:03.278 4c65c6406 lib/reduce: Fix an incorrect chunk map index 00:02:03.278 11441d6e7 lib/reduce: Don't need to persist the old chunk map 00:02:03.278 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:02:03.278 5592070b3 doc: update nvmf_tracing.md 00:02:03.278 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:02:03.288 [Pipeline] } 00:02:03.302 [Pipeline] // stage 00:02:03.311 [Pipeline] stage 00:02:03.314 [Pipeline] { (Prepare) 00:02:03.330 [Pipeline] writeFile 00:02:03.348 [Pipeline] sh 00:02:03.629 + logger -p user.info -t JENKINS-CI 00:02:03.641 [Pipeline] sh 00:02:03.920 + logger -p user.info -t JENKINS-CI 00:02:03.932 [Pipeline] sh 00:02:04.211 + cat autorun-spdk.conf 00:02:04.211 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.211 SPDK_TEST_NVMF=1 00:02:04.211 SPDK_TEST_NVME_CLI=1 00:02:04.211 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:04.211 SPDK_TEST_NVMF_NICS=e810 00:02:04.211 SPDK_TEST_VFIOUSER=1 00:02:04.211 SPDK_RUN_UBSAN=1 00:02:04.211 NET_TYPE=phy 00:02:04.218 RUN_NIGHTLY=0 00:02:04.222 [Pipeline] readFile 00:02:04.247 [Pipeline] withEnv 00:02:04.249 [Pipeline] { 00:02:04.261 [Pipeline] sh 00:02:04.550 + set -ex 00:02:04.550 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:04.550 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:04.550 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.550 ++ SPDK_TEST_NVMF=1 00:02:04.550 ++ SPDK_TEST_NVME_CLI=1 00:02:04.550 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:04.550 ++ SPDK_TEST_NVMF_NICS=e810 00:02:04.550 ++ 
SPDK_TEST_VFIOUSER=1 00:02:04.550 ++ SPDK_RUN_UBSAN=1 00:02:04.550 ++ NET_TYPE=phy 00:02:04.550 ++ RUN_NIGHTLY=0 00:02:04.550 + case $SPDK_TEST_NVMF_NICS in 00:02:04.550 + DRIVERS=ice 00:02:04.550 + [[ tcp == \r\d\m\a ]] 00:02:04.550 + [[ -n ice ]] 00:02:04.550 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:04.550 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:04.550 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:04.550 rmmod: ERROR: Module irdma is not currently loaded 00:02:04.550 rmmod: ERROR: Module i40iw is not currently loaded 00:02:04.550 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:04.550 + true 00:02:04.550 + for D in $DRIVERS 00:02:04.550 + sudo modprobe ice 00:02:04.550 + exit 0 00:02:04.560 [Pipeline] } 00:02:04.574 [Pipeline] // withEnv 00:02:04.579 [Pipeline] } 00:02:04.593 [Pipeline] // stage 00:02:04.603 [Pipeline] catchError 00:02:04.605 [Pipeline] { 00:02:04.618 [Pipeline] timeout 00:02:04.618 Timeout set to expire in 1 hr 0 min 00:02:04.620 [Pipeline] { 00:02:04.634 [Pipeline] stage 00:02:04.636 [Pipeline] { (Tests) 00:02:04.653 [Pipeline] sh 00:02:04.939 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:04.939 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:04.939 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:04.939 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:04.939 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:04.939 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:04.939 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:04.939 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:04.939 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:04.939 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:04.939 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:04.939 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:04.939 + source /etc/os-release 00:02:04.939 ++ NAME='Fedora Linux' 00:02:04.939 ++ VERSION='39 (Cloud Edition)' 00:02:04.939 ++ ID=fedora 00:02:04.939 ++ VERSION_ID=39 00:02:04.939 ++ VERSION_CODENAME= 00:02:04.939 ++ PLATFORM_ID=platform:f39 00:02:04.939 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:04.939 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:04.939 ++ LOGO=fedora-logo-icon 00:02:04.939 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:04.939 ++ HOME_URL=https://fedoraproject.org/ 00:02:04.939 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:04.939 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:04.939 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:04.939 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:04.939 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:04.939 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:04.939 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:04.939 ++ SUPPORT_END=2024-11-12 00:02:04.939 ++ VARIANT='Cloud Edition' 00:02:04.939 ++ VARIANT_ID=cloud 00:02:04.939 + uname -a 00:02:04.939 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:04.939 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:06.838 Hugepages 00:02:06.838 node hugesize free / total 00:02:06.838 node0 1048576kB 0 / 0 00:02:06.838 node0 2048kB 0 / 0 00:02:06.838 node1 1048576kB 0 / 0 00:02:06.838 node1 2048kB 0 / 0 00:02:06.838 00:02:06.838 Type BDF Vendor Device NUMA 
Driver Device Block devices 00:02:06.838 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:06.838 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:06.838 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:06.838 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:06.838 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:06.838 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:06.838 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:06.838 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:06.838 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:06.838 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:06.838 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:06.838 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:06.838 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:06.838 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:07.096 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:07.096 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:07.096 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:07.096 + rm -f /tmp/spdk-ld-path 00:02:07.096 + source autorun-spdk.conf 00:02:07.096 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.096 ++ SPDK_TEST_NVMF=1 00:02:07.096 ++ SPDK_TEST_NVME_CLI=1 00:02:07.096 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.096 ++ SPDK_TEST_NVMF_NICS=e810 00:02:07.096 ++ SPDK_TEST_VFIOUSER=1 00:02:07.096 ++ SPDK_RUN_UBSAN=1 00:02:07.096 ++ NET_TYPE=phy 00:02:07.096 ++ RUN_NIGHTLY=0 00:02:07.096 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.096 + [[ -n '' ]] 00:02:07.096 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.096 + for M in /var/spdk/build-*-manifest.txt 00:02:07.096 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:07.096 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.096 + for M in /var/spdk/build-*-manifest.txt 00:02:07.096 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:07.096 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.096 + for M in /var/spdk/build-*-manifest.txt 00:02:07.096 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.096 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.096 ++ uname 00:02:07.096 + [[ Linux == \L\i\n\u\x ]] 00:02:07.096 + sudo dmesg -T 00:02:07.096 + sudo dmesg --clear 00:02:07.096 + dmesg_pid=2176940 00:02:07.096 + [[ Fedora Linux == FreeBSD ]] 00:02:07.096 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.096 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.096 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:07.096 + [[ -x /usr/src/fio-static/fio ]] 00:02:07.096 + export FIO_BIN=/usr/src/fio-static/fio 00:02:07.096 + FIO_BIN=/usr/src/fio-static/fio 00:02:07.096 + sudo dmesg -Tw 00:02:07.096 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:07.096 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:07.096 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:07.096 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.096 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.096 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:07.096 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.096 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.096 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.096 07:45:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:07.096 07:45:01 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.096 07:45:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.096 07:45:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:07.096 07:45:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:07.096 07:45:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.096 07:45:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:07.096 07:45:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:07.096 07:45:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:07.096 07:45:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:07.096 07:45:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:07.096 07:45:01 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:07.096 07:45:01 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.354 07:45:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:07.354 07:45:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:07.354 07:45:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:07.354 07:45:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:07.354 07:45:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.354 07:45:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.354 07:45:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.354 07:45:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.354 07:45:01 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.354 07:45:01 -- paths/export.sh@5 -- $ export PATH 00:02:07.354 07:45:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.354 07:45:01 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:07.354 07:45:01 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:07.354 07:45:01 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732689901.XXXXXX 00:02:07.354 07:45:01 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732689901.dBfuLI 00:02:07.354 07:45:01 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:07.354 07:45:01 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:07.354 07:45:01 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:07.354 07:45:01 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:07.354 07:45:01 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:07.354 07:45:01 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:07.354 07:45:01 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:07.354 07:45:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.355 07:45:01 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:07.355 07:45:01 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:07.355 07:45:01 -- pm/common@17 -- $ local monitor 00:02:07.355 07:45:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.355 07:45:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.355 07:45:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.355 07:45:01 -- pm/common@21 -- $ date +%s 00:02:07.355 07:45:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.355 07:45:01 -- pm/common@21 -- $ date +%s 00:02:07.355 07:45:01 -- pm/common@25 -- $ sleep 1 00:02:07.355 07:45:01 -- pm/common@21 -- $ date +%s 00:02:07.355 07:45:01 -- pm/common@21 -- $ date +%s 00:02:07.355 07:45:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732689901 00:02:07.355 07:45:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732689901 00:02:07.355 07:45:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732689901 00:02:07.355 07:45:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732689901 00:02:07.355 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732689901_collect-vmstat.pm.log 00:02:07.355 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732689901_collect-cpu-load.pm.log 00:02:07.355 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732689901_collect-cpu-temp.pm.log 00:02:07.355 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732689901_collect-bmc-pm.bmc.pm.log 00:02:08.289 07:45:02 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:08.289 07:45:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.289 07:45:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.289 07:45:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.289 07:45:02 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.289 Wed Nov 27 06:45:02 AM UTC 2024 00:02:08.289 07:45:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.289 v25.01-pre-273-g4c65c6406 00:02:08.289 07:45:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:08.289 07:45:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.289 07:45:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.289 07:45:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:08.289 07:45:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:08.289 07:45:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.289 ************************************ 00:02:08.289 START TEST ubsan 00:02:08.289 ************************************ 00:02:08.289 07:45:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:08.289 using ubsan 00:02:08.289 00:02:08.289 real 0m0.000s 00:02:08.289 user 0m0.000s 00:02:08.289 sys 0m0.000s 00:02:08.289 07:45:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:08.289 07:45:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.289 ************************************ 00:02:08.289 END TEST ubsan 00:02:08.289 ************************************ 00:02:08.289 07:45:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:08.289 07:45:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.289 07:45:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.289 07:45:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.289 07:45:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.289 07:45:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.289 07:45:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:08.289 07:45:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.289 
07:45:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:08.548 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:08.548 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:08.806 Using 'verbs' RDMA provider 00:02:21.945 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:31.919 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:32.435 Creating mk/config.mk...done. 00:02:32.435 Creating mk/cc.flags.mk...done. 00:02:32.435 Type 'make' to build. 00:02:32.435 07:45:26 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:32.435 07:45:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:32.435 07:45:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:32.435 07:45:26 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.435 ************************************ 00:02:32.435 START TEST make 00:02:32.435 ************************************ 00:02:32.435 07:45:26 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:33.002 make[1]: Nothing to be done for 'all'. 00:02:34.382 The Meson build system 00:02:34.382 Version: 1.5.0 00:02:34.382 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:34.382 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:34.382 Build type: native build 00:02:34.382 Project name: libvfio-user 00:02:34.382 Project version: 0.0.1 00:02:34.382 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:34.382 C linker for the host machine: cc ld.bfd 2.40-14 00:02:34.382 Host machine cpu family: x86_64 00:02:34.382 Host machine cpu: x86_64 00:02:34.382 Run-time dependency threads found: YES 00:02:34.382 Library dl found: YES 00:02:34.382 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:34.382 Run-time dependency json-c found: YES 0.17 00:02:34.382 Run-time dependency cmocka found: YES 1.1.7 00:02:34.382 Program pytest-3 found: NO 00:02:34.382 Program flake8 found: NO 00:02:34.382 Program misspell-fixer found: NO 00:02:34.382 Program restructuredtext-lint found: NO 00:02:34.382 Program valgrind found: YES (/usr/bin/valgrind) 00:02:34.382 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.382 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.383 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.383 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:34.383 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:34.383 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:34.383 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
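For reference, the build running here is what spdk/autorun.sh hands off to autobuild.sh: ./configure is invoked with the option string assembled by get_config_params (printed above as config_params, plus --with-shared on the autobuild.sh@67 line), followed by make. A rough sketch of reproducing the same configuration by hand, assuming a local SPDK checkout in ./spdk; the fio path is taken from the logged configure line, and -j96 is specific to this runner:

cd spdk
# Same option set this CI run passed to configure (copied from the log above).
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j"$(nproc)"    # the CI run used make -j96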
00:02:34.383 Build targets in project: 8 00:02:34.383 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:34.383 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:34.383 00:02:34.383 libvfio-user 0.0.1 00:02:34.383 00:02:34.383 User defined options 00:02:34.383 buildtype : debug 00:02:34.383 default_library: shared 00:02:34.383 libdir : /usr/local/lib 00:02:34.383 00:02:34.383 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.640 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:34.898 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:34.898 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:34.898 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:34.898 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:34.898 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:34.898 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:34.898 [7/37] Compiling C object samples/null.p/null.c.o 00:02:34.898 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:34.898 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:34.898 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:34.898 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:34.898 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:34.898 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:34.898 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:34.898 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:34.898 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:34.898 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:34.898 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:34.898 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:34.898 [20/37] Compiling C object samples/client.p/client.c.o 00:02:34.898 [21/37] Compiling C object samples/server.p/server.c.o 00:02:34.898 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:34.898 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:34.898 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:34.898 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:34.898 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:34.898 [27/37] Linking target samples/client 00:02:34.898 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:34.898 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:35.156 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:35.156 [31/37] Linking target test/unit_tests 00:02:35.156 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:35.156 [33/37] Linking target samples/null 00:02:35.156 [34/37] Linking target samples/server 00:02:35.156 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:35.156 [36/37] Linking target samples/gpio-pci-idio-16 00:02:35.156 [37/37] Linking target samples/lspci 00:02:35.156 INFO: autodetecting backend as ninja 00:02:35.156 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:35.156 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:35.723 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:35.723 ninja: no work to do. 00:02:40.989 The Meson build system 00:02:40.989 Version: 1.5.0 00:02:40.989 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:40.989 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:40.989 Build type: native build 00:02:40.989 Program cat found: YES (/usr/bin/cat) 00:02:40.989 Project name: DPDK 00:02:40.989 Project version: 24.03.0 00:02:40.989 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:40.989 C linker for the host machine: cc ld.bfd 2.40-14 00:02:40.989 Host machine cpu family: x86_64 00:02:40.989 Host machine cpu: x86_64 00:02:40.989 Message: ## Building in Developer Mode ## 00:02:40.989 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:40.989 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:40.989 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:40.989 Program python3 found: YES (/usr/bin/python3) 00:02:40.989 Program cat found: YES (/usr/bin/cat) 00:02:40.989 Compiler for C supports arguments -march=native: YES 00:02:40.989 Checking for size of "void *" : 8 00:02:40.989 Checking for size of "void *" : 8 (cached) 00:02:40.989 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:40.989 Library m found: YES 00:02:40.989 Library numa found: YES 00:02:40.989 Has header "numaif.h" : YES 00:02:40.989 Library fdt found: NO 00:02:40.989 Library execinfo found: NO 00:02:40.989 Has header "execinfo.h" : YES 00:02:40.989 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:40.989 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:40.989 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:40.989 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:40.989 Run-time dependency openssl found: YES 3.1.1 00:02:40.989 Run-time dependency libpcap found: YES 1.10.4 00:02:40.989 Has header "pcap.h" with dependency libpcap: YES 00:02:40.989 Compiler for C supports arguments -Wcast-qual: YES 00:02:40.989 Compiler for C supports arguments -Wdeprecated: YES 00:02:40.989 Compiler for C supports arguments -Wformat: YES 00:02:40.989 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:40.989 Compiler for C supports arguments -Wformat-security: NO 00:02:40.989 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.989 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:40.989 Compiler for C supports arguments -Wnested-externs: YES 00:02:40.989 Compiler for C supports arguments -Wold-style-definition: YES 00:02:40.989 Compiler for C supports arguments -Wpointer-arith: YES 00:02:40.989 Compiler for C supports arguments -Wsign-compare: YES 00:02:40.989 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:40.989 Compiler for C supports arguments -Wundef: YES 00:02:40.989 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.989 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:40.989 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:40.989 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.989 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:40.989 Program objdump found: YES (/usr/bin/objdump) 00:02:40.989 Compiler for C supports arguments -mavx512f: YES 00:02:40.989 Checking if "AVX512 checking" compiles: YES 00:02:40.989 Fetching value of define "__SSE4_2__" : 1 00:02:40.989 Fetching value of define "__AES__" : 1 00:02:40.989 Fetching value of define "__AVX__" : 1 00:02:40.989 Fetching value of define "__AVX2__" : 1 00:02:40.989 Fetching value of define "__AVX512BW__" : 1 00:02:40.989 Fetching value of define "__AVX512CD__" : 1 00:02:40.989 Fetching value of define "__AVX512DQ__" : 1 00:02:40.989 Fetching value of define "__AVX512F__" : 1 00:02:40.989 Fetching value of define "__AVX512VL__" : 1 00:02:40.989 Fetching value of define "__PCLMUL__" : 1 00:02:40.989 Fetching value of define "__RDRND__" : 1 00:02:40.989 Fetching value of define "__RDSEED__" : 1 00:02:40.989 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:40.989 Fetching value of define "__znver1__" : (undefined) 00:02:40.989 Fetching value of define "__znver2__" : (undefined) 00:02:40.989 Fetching value of define "__znver3__" : (undefined) 00:02:40.989 Fetching value of define "__znver4__" : (undefined) 00:02:40.989 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:40.989 Message: lib/log: Defining dependency "log" 00:02:40.989 Message: lib/kvargs: Defining dependency "kvargs" 00:02:40.989 Message: lib/telemetry: Defining dependency "telemetry" 00:02:40.989 Checking for function "getentropy" : NO 00:02:40.989 Message: lib/eal: Defining dependency "eal" 00:02:40.989 Message: lib/ring: Defining dependency "ring" 00:02:40.990 Message: lib/rcu: Defining dependency "rcu" 00:02:40.990 Message: lib/mempool: Defining dependency "mempool" 00:02:40.990 Message: lib/mbuf: Defining dependency "mbuf" 00:02:40.990 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:40.990 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.990 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.990 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:40.990 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:40.990 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:40.990 Compiler for C supports arguments -mpclmul: YES 00:02:40.990 Compiler for C supports arguments -maes: YES 00:02:40.990 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:40.990 Compiler for C supports arguments -mavx512bw: YES 00:02:40.990 Compiler for C supports arguments -mavx512dq: YES 00:02:40.990 Compiler for C supports arguments -mavx512vl: YES 00:02:40.990 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:40.990 Compiler for C supports arguments -mavx2: YES 00:02:40.990 Compiler for C supports arguments -mavx: YES 00:02:40.990 Message: lib/net: Defining dependency "net" 00:02:40.990 Message: lib/meter: Defining dependency "meter" 00:02:40.990 Message: lib/ethdev: Defining dependency "ethdev" 00:02:40.990 Message: lib/pci: Defining dependency "pci" 00:02:40.990 Message: lib/cmdline: Defining dependency "cmdline" 00:02:40.990 Message: lib/hash: Defining dependency "hash" 00:02:40.990 Message: lib/timer: Defining dependency "timer" 00:02:40.990 Message: lib/compressdev: Defining dependency "compressdev" 00:02:40.990 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:40.990 Message: lib/dmadev: Defining dependency 
"dmadev" 00:02:40.990 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:40.990 Message: lib/power: Defining dependency "power" 00:02:40.990 Message: lib/reorder: Defining dependency "reorder" 00:02:40.990 Message: lib/security: Defining dependency "security" 00:02:40.990 Has header "linux/userfaultfd.h" : YES 00:02:40.990 Has header "linux/vduse.h" : YES 00:02:40.990 Message: lib/vhost: Defining dependency "vhost" 00:02:40.990 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:40.990 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:40.990 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:40.990 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:40.990 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:40.990 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:40.990 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:40.990 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:40.990 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:40.990 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:40.990 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:40.990 Configuring doxy-api-html.conf using configuration 00:02:40.990 Configuring doxy-api-man.conf using configuration 00:02:40.990 Program mandb found: YES (/usr/bin/mandb) 00:02:40.990 Program sphinx-build found: NO 00:02:40.990 Configuring rte_build_config.h using configuration 00:02:40.990 Message: 00:02:40.990 ================= 00:02:40.990 Applications Enabled 00:02:40.990 ================= 00:02:40.990 00:02:40.990 apps: 00:02:40.990 00:02:40.990 00:02:40.990 Message: 00:02:40.990 ================= 00:02:40.990 Libraries Enabled 00:02:40.990 ================= 00:02:40.990 00:02:40.990 libs: 00:02:40.990 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:40.990 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:40.990 cryptodev, dmadev, power, reorder, security, vhost, 00:02:40.990 00:02:40.990 Message: 00:02:40.990 =============== 00:02:40.990 Drivers Enabled 00:02:40.990 =============== 00:02:40.990 00:02:40.990 common: 00:02:40.990 00:02:40.990 bus: 00:02:40.990 pci, vdev, 00:02:40.990 mempool: 00:02:40.990 ring, 00:02:40.990 dma: 00:02:40.990 00:02:40.990 net: 00:02:40.990 00:02:40.990 crypto: 00:02:40.990 00:02:40.990 compress: 00:02:40.990 00:02:40.990 vdpa: 00:02:40.990 00:02:40.990 00:02:40.990 Message: 00:02:40.990 ================= 00:02:40.990 Content Skipped 00:02:40.990 ================= 00:02:40.990 00:02:40.990 apps: 00:02:40.990 dumpcap: explicitly disabled via build config 00:02:40.990 graph: explicitly disabled via build config 00:02:40.990 pdump: explicitly disabled via build config 00:02:40.990 proc-info: explicitly disabled via build config 00:02:40.990 test-acl: explicitly disabled via build config 00:02:40.990 test-bbdev: explicitly disabled via build config 00:02:40.990 test-cmdline: explicitly disabled via build config 00:02:40.990 test-compress-perf: explicitly disabled via build config 00:02:40.990 test-crypto-perf: explicitly disabled via build config 00:02:40.990 test-dma-perf: explicitly disabled via build config 00:02:40.990 test-eventdev: explicitly disabled via build config 00:02:40.990 test-fib: explicitly disabled via build config 00:02:40.990 test-flow-perf: explicitly disabled via build config 00:02:40.990 test-gpudev: explicitly 
disabled via build config 00:02:40.990 test-mldev: explicitly disabled via build config 00:02:40.990 test-pipeline: explicitly disabled via build config 00:02:40.990 test-pmd: explicitly disabled via build config 00:02:40.990 test-regex: explicitly disabled via build config 00:02:40.990 test-sad: explicitly disabled via build config 00:02:40.990 test-security-perf: explicitly disabled via build config 00:02:40.990 00:02:40.990 libs: 00:02:40.990 argparse: explicitly disabled via build config 00:02:40.990 metrics: explicitly disabled via build config 00:02:40.990 acl: explicitly disabled via build config 00:02:40.990 bbdev: explicitly disabled via build config 00:02:40.990 bitratestats: explicitly disabled via build config 00:02:40.990 bpf: explicitly disabled via build config 00:02:40.990 cfgfile: explicitly disabled via build config 00:02:40.990 distributor: explicitly disabled via build config 00:02:40.990 efd: explicitly disabled via build config 00:02:40.990 eventdev: explicitly disabled via build config 00:02:40.990 dispatcher: explicitly disabled via build config 00:02:40.990 gpudev: explicitly disabled via build config 00:02:40.990 gro: explicitly disabled via build config 00:02:40.990 gso: explicitly disabled via build config 00:02:40.990 ip_frag: explicitly disabled via build config 00:02:40.990 jobstats: explicitly disabled via build config 00:02:40.990 latencystats: explicitly disabled via build config 00:02:40.990 lpm: explicitly disabled via build config 00:02:40.990 member: explicitly disabled via build config 00:02:40.990 pcapng: explicitly disabled via build config 00:02:40.990 rawdev: explicitly disabled via build config 00:02:40.990 regexdev: explicitly disabled via build config 00:02:40.990 mldev: explicitly disabled via build config 00:02:40.990 rib: explicitly disabled via build config 00:02:40.990 sched: explicitly disabled via build config 00:02:40.990 stack: explicitly disabled via build config 00:02:40.990 ipsec: explicitly disabled via build config 00:02:40.990 pdcp: explicitly disabled via build config 00:02:40.990 fib: explicitly disabled via build config 00:02:40.990 port: explicitly disabled via build config 00:02:40.990 pdump: explicitly disabled via build config 00:02:40.990 table: explicitly disabled via build config 00:02:40.990 pipeline: explicitly disabled via build config 00:02:40.990 graph: explicitly disabled via build config 00:02:40.990 node: explicitly disabled via build config 00:02:40.990 00:02:40.990 drivers: 00:02:40.990 common/cpt: not in enabled drivers build config 00:02:40.990 common/dpaax: not in enabled drivers build config 00:02:40.990 common/iavf: not in enabled drivers build config 00:02:40.990 common/idpf: not in enabled drivers build config 00:02:40.990 common/ionic: not in enabled drivers build config 00:02:40.990 common/mvep: not in enabled drivers build config 00:02:40.990 common/octeontx: not in enabled drivers build config 00:02:40.990 bus/auxiliary: not in enabled drivers build config 00:02:40.990 bus/cdx: not in enabled drivers build config 00:02:40.990 bus/dpaa: not in enabled drivers build config 00:02:40.990 bus/fslmc: not in enabled drivers build config 00:02:40.990 bus/ifpga: not in enabled drivers build config 00:02:40.990 bus/platform: not in enabled drivers build config 00:02:40.990 bus/uacce: not in enabled drivers build config 00:02:40.990 bus/vmbus: not in enabled drivers build config 00:02:40.990 common/cnxk: not in enabled drivers build config 00:02:40.990 common/mlx5: not in enabled drivers build config 
00:02:40.990 common/nfp: not in enabled drivers build config 00:02:40.990 common/nitrox: not in enabled drivers build config 00:02:40.990 common/qat: not in enabled drivers build config 00:02:40.990 common/sfc_efx: not in enabled drivers build config 00:02:40.990 mempool/bucket: not in enabled drivers build config 00:02:40.990 mempool/cnxk: not in enabled drivers build config 00:02:40.990 mempool/dpaa: not in enabled drivers build config 00:02:40.990 mempool/dpaa2: not in enabled drivers build config 00:02:40.990 mempool/octeontx: not in enabled drivers build config 00:02:40.990 mempool/stack: not in enabled drivers build config 00:02:40.990 dma/cnxk: not in enabled drivers build config 00:02:40.990 dma/dpaa: not in enabled drivers build config 00:02:40.990 dma/dpaa2: not in enabled drivers build config 00:02:40.990 dma/hisilicon: not in enabled drivers build config 00:02:40.990 dma/idxd: not in enabled drivers build config 00:02:40.990 dma/ioat: not in enabled drivers build config 00:02:40.990 dma/skeleton: not in enabled drivers build config 00:02:40.990 net/af_packet: not in enabled drivers build config 00:02:40.990 net/af_xdp: not in enabled drivers build config 00:02:40.990 net/ark: not in enabled drivers build config 00:02:40.990 net/atlantic: not in enabled drivers build config 00:02:40.990 net/avp: not in enabled drivers build config 00:02:40.990 net/axgbe: not in enabled drivers build config 00:02:40.990 net/bnx2x: not in enabled drivers build config 00:02:40.990 net/bnxt: not in enabled drivers build config 00:02:40.990 net/bonding: not in enabled drivers build config 00:02:40.990 net/cnxk: not in enabled drivers build config 00:02:40.991 net/cpfl: not in enabled drivers build config 00:02:40.991 net/cxgbe: not in enabled drivers build config 00:02:40.991 net/dpaa: not in enabled drivers build config 00:02:40.991 net/dpaa2: not in enabled drivers build config 00:02:40.991 net/e1000: not in enabled drivers build config 00:02:40.991 net/ena: not in enabled drivers build config 00:02:40.991 net/enetc: not in enabled drivers build config 00:02:40.991 net/enetfec: not in enabled drivers build config 00:02:40.991 net/enic: not in enabled drivers build config 00:02:40.991 net/failsafe: not in enabled drivers build config 00:02:40.991 net/fm10k: not in enabled drivers build config 00:02:40.991 net/gve: not in enabled drivers build config 00:02:40.991 net/hinic: not in enabled drivers build config 00:02:40.991 net/hns3: not in enabled drivers build config 00:02:40.991 net/i40e: not in enabled drivers build config 00:02:40.991 net/iavf: not in enabled drivers build config 00:02:40.991 net/ice: not in enabled drivers build config 00:02:40.991 net/idpf: not in enabled drivers build config 00:02:40.991 net/igc: not in enabled drivers build config 00:02:40.991 net/ionic: not in enabled drivers build config 00:02:40.991 net/ipn3ke: not in enabled drivers build config 00:02:40.991 net/ixgbe: not in enabled drivers build config 00:02:40.991 net/mana: not in enabled drivers build config 00:02:40.991 net/memif: not in enabled drivers build config 00:02:40.991 net/mlx4: not in enabled drivers build config 00:02:40.991 net/mlx5: not in enabled drivers build config 00:02:40.991 net/mvneta: not in enabled drivers build config 00:02:40.991 net/mvpp2: not in enabled drivers build config 00:02:40.991 net/netvsc: not in enabled drivers build config 00:02:40.991 net/nfb: not in enabled drivers build config 00:02:40.991 net/nfp: not in enabled drivers build config 00:02:40.991 net/ngbe: not in enabled 
drivers build config 00:02:40.991 net/null: not in enabled drivers build config 00:02:40.991 net/octeontx: not in enabled drivers build config 00:02:40.991 net/octeon_ep: not in enabled drivers build config 00:02:40.991 net/pcap: not in enabled drivers build config 00:02:40.991 net/pfe: not in enabled drivers build config 00:02:40.991 net/qede: not in enabled drivers build config 00:02:40.991 net/ring: not in enabled drivers build config 00:02:40.991 net/sfc: not in enabled drivers build config 00:02:40.991 net/softnic: not in enabled drivers build config 00:02:40.991 net/tap: not in enabled drivers build config 00:02:40.991 net/thunderx: not in enabled drivers build config 00:02:40.991 net/txgbe: not in enabled drivers build config 00:02:40.991 net/vdev_netvsc: not in enabled drivers build config 00:02:40.991 net/vhost: not in enabled drivers build config 00:02:40.991 net/virtio: not in enabled drivers build config 00:02:40.991 net/vmxnet3: not in enabled drivers build config 00:02:40.991 raw/*: missing internal dependency, "rawdev" 00:02:40.991 crypto/armv8: not in enabled drivers build config 00:02:40.991 crypto/bcmfs: not in enabled drivers build config 00:02:40.991 crypto/caam_jr: not in enabled drivers build config 00:02:40.991 crypto/ccp: not in enabled drivers build config 00:02:40.991 crypto/cnxk: not in enabled drivers build config 00:02:40.991 crypto/dpaa_sec: not in enabled drivers build config 00:02:40.991 crypto/dpaa2_sec: not in enabled drivers build config 00:02:40.991 crypto/ipsec_mb: not in enabled drivers build config 00:02:40.991 crypto/mlx5: not in enabled drivers build config 00:02:40.991 crypto/mvsam: not in enabled drivers build config 00:02:40.991 crypto/nitrox: not in enabled drivers build config 00:02:40.991 crypto/null: not in enabled drivers build config 00:02:40.991 crypto/octeontx: not in enabled drivers build config 00:02:40.991 crypto/openssl: not in enabled drivers build config 00:02:40.991 crypto/scheduler: not in enabled drivers build config 00:02:40.991 crypto/uadk: not in enabled drivers build config 00:02:40.991 crypto/virtio: not in enabled drivers build config 00:02:40.991 compress/isal: not in enabled drivers build config 00:02:40.991 compress/mlx5: not in enabled drivers build config 00:02:40.991 compress/nitrox: not in enabled drivers build config 00:02:40.991 compress/octeontx: not in enabled drivers build config 00:02:40.991 compress/zlib: not in enabled drivers build config 00:02:40.991 regex/*: missing internal dependency, "regexdev" 00:02:40.991 ml/*: missing internal dependency, "mldev" 00:02:40.991 vdpa/ifc: not in enabled drivers build config 00:02:40.991 vdpa/mlx5: not in enabled drivers build config 00:02:40.991 vdpa/nfp: not in enabled drivers build config 00:02:40.991 vdpa/sfc: not in enabled drivers build config 00:02:40.991 event/*: missing internal dependency, "eventdev" 00:02:40.991 baseband/*: missing internal dependency, "bbdev" 00:02:40.991 gpu/*: missing internal dependency, "gpudev" 00:02:40.991 00:02:40.991 00:02:40.991 Build targets in project: 85 00:02:40.991 00:02:40.991 DPDK 24.03.0 00:02:40.991 00:02:40.991 User defined options 00:02:40.991 buildtype : debug 00:02:40.991 default_library : shared 00:02:40.991 libdir : lib 00:02:40.991 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:40.991 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:40.991 c_link_args : 00:02:40.991 cpu_instruction_set: native 00:02:40.991 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:40.991 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:40.991 enable_docs : false 00:02:40.991 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:40.991 enable_kmods : false 00:02:40.991 max_lcores : 128 00:02:40.991 tests : false 00:02:40.991 00:02:40.991 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.249 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:41.511 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:41.511 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:41.511 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:41.511 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:41.511 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:41.512 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:41.512 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:41.512 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:41.512 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:41.512 [10/268] Linking static target lib/librte_kvargs.a 00:02:41.512 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:41.512 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:41.512 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:41.512 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:41.512 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:41.512 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:41.512 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:41.512 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:41.512 [19/268] Linking static target lib/librte_log.a 00:02:41.773 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:41.773 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:41.773 [22/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:41.773 [23/268] Linking static target lib/librte_pci.a 00:02:41.773 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:41.773 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.035 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.035 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.035 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:42.035 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:42.035 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 
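The "User defined options" summary above is how this job configures the DPDK 24.03 submodule: a debug, shared-library build compiled with -Werror, with the listed apps and libraries disabled, only the bus, mempool/ring and power drivers enabled, and docs, kmods and tests turned off. A hedged sketch of a meson setup invocation that reproduces that option set by hand; the option names and values are copied from the summary (they are standard DPDK/meson options), but the exact command SPDK's build wrapper runs is not shown in this log, and the prefix here substitutes a relative path for the workspace-absolute one above:

cd spdk/dpdk
meson setup build-tmp \
    --buildtype=debug --default-library=shared --libdir=lib --prefix="$PWD/build" \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump \
    -Ddisable_libs=bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
ninja -C build-tmp    # the [n/268] progress lines in this log come from the equivalent ninja step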
00:02:42.035 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.035 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.035 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.035 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:42.035 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:42.035 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:42.035 [37/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:42.035 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:42.035 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:42.035 [40/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:42.035 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:42.035 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:42.035 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.035 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:42.035 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:42.035 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:42.035 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:42.035 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:42.035 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:42.035 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:42.035 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:42.035 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:42.035 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:42.035 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.035 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:42.035 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:42.035 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:42.035 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:42.035 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:42.035 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:42.035 [61/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:42.035 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.035 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:42.035 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:42.035 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:42.035 [66/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:42.035 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:42.035 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.035 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.035 [70/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:42.035 [71/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.035 [72/268] Linking static target lib/librte_ring.a 00:02:42.035 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:42.035 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:42.035 [75/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:42.035 [76/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:42.035 [77/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.035 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:42.035 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:42.035 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:42.035 [81/268] Linking static target lib/librte_meter.a 00:02:42.035 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:42.035 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:42.035 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:42.035 [85/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:42.035 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.035 [87/268] Linking static target lib/librte_telemetry.a 00:02:42.035 [88/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:42.035 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.035 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:42.035 [91/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.035 [92/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:42.035 [93/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.036 [94/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:42.036 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:42.036 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:42.294 [97/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:42.294 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:42.294 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:42.294 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:42.294 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:42.294 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:42.294 [103/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:42.294 [104/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:42.294 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.294 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:42.294 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.294 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.294 [109/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:42.294 [110/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:42.294 [111/268] Linking static target lib/librte_mempool.a 00:02:42.294 [112/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:42.294 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:42.294 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:42.294 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:42.294 [116/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:42.294 [117/268] Linking static target lib/librte_rcu.a 00:02:42.294 [118/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:42.294 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.294 [120/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:42.294 [121/268] Linking static target lib/librte_net.a 00:02:42.294 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:42.294 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.294 [124/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:42.294 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.294 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.294 [127/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:42.294 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.294 [129/268] Linking static target lib/librte_eal.a 00:02:42.294 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.294 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:42.294 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:42.294 [133/268] Linking static target lib/librte_cmdline.a 00:02:42.294 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.294 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.294 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.294 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:42.294 [138/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:42.553 [139/268] Linking static target lib/librte_mbuf.a 00:02:42.553 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:42.553 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:42.553 [142/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.553 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:42.553 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.553 [145/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:42.553 [146/268] Linking static target lib/librte_timer.a 00:02:42.553 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:42.553 [148/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.553 [149/268] Linking target lib/librte_log.so.24.1 00:02:42.553 [150/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.553 [151/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:42.553 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:42.553 [153/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.553 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:42.553 [155/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:42.553 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:42.553 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:42.553 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:42.553 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:42.553 [160/268] Linking static target lib/librte_dmadev.a 00:02:42.553 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.553 [162/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:42.553 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:42.553 [164/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:42.553 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:42.553 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:42.553 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:42.553 [168/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:42.553 [169/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.553 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:42.553 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:42.553 [172/268] Linking target lib/librte_kvargs.so.24.1 00:02:42.553 [173/268] Linking target lib/librte_telemetry.so.24.1 00:02:42.553 [174/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:42.812 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:42.812 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:42.812 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:42.812 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:42.812 [179/268] Linking static target lib/librte_power.a 00:02:42.812 [180/268] Linking static target lib/librte_compressdev.a 00:02:42.812 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:42.812 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:42.812 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:42.812 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:42.812 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:42.812 [186/268] Linking static target lib/librte_reorder.a 00:02:42.812 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:42.812 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:42.812 [189/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:42.812 [190/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:42.812 [191/268] Linking static target 
drivers/libtmp_rte_mempool_ring.a 00:02:42.812 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:42.812 [193/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.812 [194/268] Linking static target lib/librte_security.a 00:02:42.812 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:42.812 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.812 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:42.812 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:42.812 [199/268] Linking static target drivers/librte_bus_vdev.a 00:02:42.812 [200/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.812 [201/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:42.812 [202/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:42.812 [203/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:43.070 [204/268] Linking static target lib/librte_cryptodev.a 00:02:43.070 [205/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:43.070 [206/268] Linking static target lib/librte_hash.a 00:02:43.070 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.070 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.070 [209/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.070 [210/268] Linking static target drivers/librte_bus_pci.a 00:02:43.070 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:43.070 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:43.070 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:43.070 [214/268] Linking static target drivers/librte_mempool_ring.a 00:02:43.070 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.070 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.070 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.070 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.329 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:43.329 [220/268] Linking static target lib/librte_ethdev.a 00:02:43.329 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.329 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.329 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:43.585 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.585 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.585 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.842 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:44.774 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:44.774 [229/268] Linking static target lib/librte_vhost.a 00:02:44.774 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.669 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.845 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.776 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.776 [234/268] Linking target lib/librte_eal.so.24.1 00:02:51.776 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:51.776 [236/268] Linking target lib/librte_meter.so.24.1 00:02:51.776 [237/268] Linking target lib/librte_pci.so.24.1 00:02:51.776 [238/268] Linking target lib/librte_timer.so.24.1 00:02:51.776 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:51.776 [240/268] Linking target lib/librte_ring.so.24.1 00:02:51.776 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:52.034 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:52.034 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:52.034 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:52.034 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:52.034 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:52.034 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:52.034 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:52.034 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:52.034 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:52.034 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:52.034 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:52.034 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:52.291 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:52.291 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:52.291 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:52.291 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:52.291 [258/268] Linking target lib/librte_net.so.24.1 00:02:52.549 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:52.549 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:52.549 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:52.549 [262/268] Linking target lib/librte_hash.so.24.1 00:02:52.549 [263/268] Linking target lib/librte_security.so.24.1 00:02:52.549 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:52.549 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:52.549 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:52.807 [267/268] Linking target lib/librte_power.so.24.1 00:02:52.807 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:52.807 INFO: autodetecting backend as ninja 00:02:52.807 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 
00:03:05.187 CC lib/ut_mock/mock.o 00:03:05.187 CC lib/ut/ut.o 00:03:05.187 CC lib/log/log.o 00:03:05.187 CC lib/log/log_flags.o 00:03:05.187 CC lib/log/log_deprecated.o 00:03:05.187 LIB libspdk_ut_mock.a 00:03:05.187 LIB libspdk_ut.a 00:03:05.187 LIB libspdk_log.a 00:03:05.187 SO libspdk_ut_mock.so.6.0 00:03:05.187 SO libspdk_ut.so.2.0 00:03:05.188 SO libspdk_log.so.7.1 00:03:05.188 SYMLINK libspdk_ut_mock.so 00:03:05.188 SYMLINK libspdk_ut.so 00:03:05.188 SYMLINK libspdk_log.so 00:03:05.188 CC lib/dma/dma.o 00:03:05.188 CC lib/util/base64.o 00:03:05.188 CC lib/util/crc16.o 00:03:05.188 CC lib/util/bit_array.o 00:03:05.188 CC lib/util/cpuset.o 00:03:05.188 CC lib/util/crc32.o 00:03:05.188 CC lib/util/crc32c.o 00:03:05.188 CC lib/util/crc32_ieee.o 00:03:05.188 CC lib/util/dif.o 00:03:05.188 CC lib/util/crc64.o 00:03:05.188 CC lib/util/fd.o 00:03:05.188 CC lib/util/fd_group.o 00:03:05.188 CC lib/util/file.o 00:03:05.188 CC lib/util/hexlify.o 00:03:05.188 CC lib/util/iov.o 00:03:05.188 CC lib/util/net.o 00:03:05.188 CC lib/util/math.o 00:03:05.188 CC lib/util/pipe.o 00:03:05.188 CC lib/util/strerror_tls.o 00:03:05.188 CC lib/util/string.o 00:03:05.188 CC lib/util/uuid.o 00:03:05.188 CC lib/util/xor.o 00:03:05.188 CC lib/util/zipf.o 00:03:05.188 CC lib/util/md5.o 00:03:05.188 CC lib/ioat/ioat.o 00:03:05.188 CXX lib/trace_parser/trace.o 00:03:05.188 CC lib/vfio_user/host/vfio_user_pci.o 00:03:05.188 CC lib/vfio_user/host/vfio_user.o 00:03:05.188 LIB libspdk_dma.a 00:03:05.188 SO libspdk_dma.so.5.0 00:03:05.188 LIB libspdk_ioat.a 00:03:05.188 SYMLINK libspdk_dma.so 00:03:05.188 SO libspdk_ioat.so.7.0 00:03:05.188 LIB libspdk_vfio_user.a 00:03:05.188 SYMLINK libspdk_ioat.so 00:03:05.188 SO libspdk_vfio_user.so.5.0 00:03:05.188 SYMLINK libspdk_vfio_user.so 00:03:05.188 LIB libspdk_util.a 00:03:05.188 SO libspdk_util.so.10.1 00:03:05.188 SYMLINK libspdk_util.so 00:03:05.188 LIB libspdk_trace_parser.a 00:03:05.188 SO libspdk_trace_parser.so.6.0 00:03:05.188 SYMLINK libspdk_trace_parser.so 00:03:05.188 CC lib/json/json_parse.o 00:03:05.188 CC lib/env_dpdk/memory.o 00:03:05.188 CC lib/json/json_write.o 00:03:05.188 CC lib/env_dpdk/env.o 00:03:05.188 CC lib/json/json_util.o 00:03:05.188 CC lib/env_dpdk/threads.o 00:03:05.188 CC lib/env_dpdk/pci.o 00:03:05.188 CC lib/env_dpdk/init.o 00:03:05.188 CC lib/env_dpdk/pci_ioat.o 00:03:05.188 CC lib/env_dpdk/pci_virtio.o 00:03:05.188 CC lib/env_dpdk/pci_vmd.o 00:03:05.188 CC lib/env_dpdk/pci_idxd.o 00:03:05.188 CC lib/env_dpdk/pci_event.o 00:03:05.188 CC lib/env_dpdk/sigbus_handler.o 00:03:05.189 CC lib/conf/conf.o 00:03:05.189 CC lib/env_dpdk/pci_dpdk.o 00:03:05.189 CC lib/vmd/led.o 00:03:05.189 CC lib/vmd/vmd.o 00:03:05.189 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:05.189 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:05.189 CC lib/idxd/idxd.o 00:03:05.189 CC lib/idxd/idxd_user.o 00:03:05.189 CC lib/idxd/idxd_kernel.o 00:03:05.189 CC lib/rdma_utils/rdma_utils.o 00:03:05.447 LIB libspdk_conf.a 00:03:05.447 LIB libspdk_rdma_utils.a 00:03:05.447 SO libspdk_conf.so.6.0 00:03:05.704 SO libspdk_rdma_utils.so.1.0 00:03:05.704 LIB libspdk_json.a 00:03:05.704 SYMLINK libspdk_conf.so 00:03:05.704 SO libspdk_json.so.6.0 00:03:05.704 SYMLINK libspdk_rdma_utils.so 00:03:05.704 SYMLINK libspdk_json.so 00:03:05.704 LIB libspdk_idxd.a 00:03:05.704 SO libspdk_idxd.so.12.1 00:03:05.962 LIB libspdk_vmd.a 00:03:05.962 SO libspdk_vmd.so.6.0 00:03:05.962 SYMLINK libspdk_idxd.so 00:03:05.962 CC lib/rdma_provider/common.o 00:03:05.962 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:05.962 
SYMLINK libspdk_vmd.so 00:03:05.962 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:05.962 CC lib/jsonrpc/jsonrpc_server.o 00:03:05.962 CC lib/jsonrpc/jsonrpc_client.o 00:03:05.962 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:06.220 LIB libspdk_rdma_provider.a 00:03:06.220 SO libspdk_rdma_provider.so.7.0 00:03:06.220 SYMLINK libspdk_rdma_provider.so 00:03:06.220 LIB libspdk_jsonrpc.a 00:03:06.220 SO libspdk_jsonrpc.so.6.0 00:03:06.220 SYMLINK libspdk_jsonrpc.so 00:03:06.477 LIB libspdk_env_dpdk.a 00:03:06.477 SO libspdk_env_dpdk.so.15.1 00:03:06.477 SYMLINK libspdk_env_dpdk.so 00:03:06.477 CC lib/rpc/rpc.o 00:03:06.735 LIB libspdk_rpc.a 00:03:06.735 SO libspdk_rpc.so.6.0 00:03:06.992 SYMLINK libspdk_rpc.so 00:03:07.249 CC lib/notify/notify.o 00:03:07.249 CC lib/notify/notify_rpc.o 00:03:07.249 CC lib/trace/trace_flags.o 00:03:07.249 CC lib/trace/trace.o 00:03:07.249 CC lib/trace/trace_rpc.o 00:03:07.249 CC lib/keyring/keyring.o 00:03:07.249 CC lib/keyring/keyring_rpc.o 00:03:07.249 LIB libspdk_notify.a 00:03:07.249 SO libspdk_notify.so.6.0 00:03:07.249 SYMLINK libspdk_notify.so 00:03:07.249 LIB libspdk_keyring.a 00:03:07.507 LIB libspdk_trace.a 00:03:07.507 SO libspdk_keyring.so.2.0 00:03:07.507 SO libspdk_trace.so.11.0 00:03:07.507 SYMLINK libspdk_keyring.so 00:03:07.507 SYMLINK libspdk_trace.so 00:03:07.764 CC lib/thread/thread.o 00:03:07.764 CC lib/thread/iobuf.o 00:03:07.764 CC lib/sock/sock.o 00:03:07.764 CC lib/sock/sock_rpc.o 00:03:08.021 LIB libspdk_sock.a 00:03:08.021 SO libspdk_sock.so.10.0 00:03:08.278 SYMLINK libspdk_sock.so 00:03:08.535 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:08.535 CC lib/nvme/nvme_ctrlr.o 00:03:08.535 CC lib/nvme/nvme_ns_cmd.o 00:03:08.535 CC lib/nvme/nvme_fabric.o 00:03:08.535 CC lib/nvme/nvme_ns.o 00:03:08.535 CC lib/nvme/nvme_pcie.o 00:03:08.535 CC lib/nvme/nvme_pcie_common.o 00:03:08.535 CC lib/nvme/nvme.o 00:03:08.535 CC lib/nvme/nvme_qpair.o 00:03:08.535 CC lib/nvme/nvme_quirks.o 00:03:08.535 CC lib/nvme/nvme_transport.o 00:03:08.535 CC lib/nvme/nvme_discovery.o 00:03:08.535 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:08.535 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:08.535 CC lib/nvme/nvme_tcp.o 00:03:08.535 CC lib/nvme/nvme_opal.o 00:03:08.535 CC lib/nvme/nvme_io_msg.o 00:03:08.535 CC lib/nvme/nvme_poll_group.o 00:03:08.535 CC lib/nvme/nvme_zns.o 00:03:08.535 CC lib/nvme/nvme_stubs.o 00:03:08.535 CC lib/nvme/nvme_auth.o 00:03:08.535 CC lib/nvme/nvme_cuse.o 00:03:08.535 CC lib/nvme/nvme_vfio_user.o 00:03:08.535 CC lib/nvme/nvme_rdma.o 00:03:08.792 LIB libspdk_thread.a 00:03:08.792 SO libspdk_thread.so.11.0 00:03:09.049 SYMLINK libspdk_thread.so 00:03:09.306 CC lib/fsdev/fsdev.o 00:03:09.306 CC lib/fsdev/fsdev_io.o 00:03:09.306 CC lib/fsdev/fsdev_rpc.o 00:03:09.306 CC lib/blob/blobstore.o 00:03:09.306 CC lib/blob/request.o 00:03:09.306 CC lib/blob/zeroes.o 00:03:09.306 CC lib/blob/blob_bs_dev.o 00:03:09.306 CC lib/accel/accel.o 00:03:09.306 CC lib/init/subsystem.o 00:03:09.306 CC lib/accel/accel_rpc.o 00:03:09.306 CC lib/init/json_config.o 00:03:09.306 CC lib/virtio/virtio_vhost_user.o 00:03:09.306 CC lib/virtio/virtio.o 00:03:09.306 CC lib/vfu_tgt/tgt_endpoint.o 00:03:09.306 CC lib/accel/accel_sw.o 00:03:09.306 CC lib/init/subsystem_rpc.o 00:03:09.306 CC lib/virtio/virtio_vfio_user.o 00:03:09.306 CC lib/vfu_tgt/tgt_rpc.o 00:03:09.306 CC lib/init/rpc.o 00:03:09.306 CC lib/virtio/virtio_pci.o 00:03:09.563 LIB libspdk_init.a 00:03:09.563 SO libspdk_init.so.6.0 00:03:09.563 LIB libspdk_virtio.a 00:03:09.563 LIB libspdk_vfu_tgt.a 00:03:09.563 SO libspdk_virtio.so.7.0 
00:03:09.563 SYMLINK libspdk_init.so 00:03:09.563 SO libspdk_vfu_tgt.so.3.0 00:03:09.563 SYMLINK libspdk_virtio.so 00:03:09.563 SYMLINK libspdk_vfu_tgt.so 00:03:09.820 LIB libspdk_fsdev.a 00:03:09.820 SO libspdk_fsdev.so.2.0 00:03:09.820 SYMLINK libspdk_fsdev.so 00:03:09.820 CC lib/event/app.o 00:03:09.820 CC lib/event/reactor.o 00:03:09.820 CC lib/event/log_rpc.o 00:03:09.820 CC lib/event/app_rpc.o 00:03:09.820 CC lib/event/scheduler_static.o 00:03:10.078 LIB libspdk_accel.a 00:03:10.078 SO libspdk_accel.so.16.0 00:03:10.078 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:10.078 SYMLINK libspdk_accel.so 00:03:10.078 LIB libspdk_nvme.a 00:03:10.078 LIB libspdk_event.a 00:03:10.336 SO libspdk_event.so.14.0 00:03:10.336 SO libspdk_nvme.so.15.0 00:03:10.336 SYMLINK libspdk_event.so 00:03:10.336 CC lib/bdev/bdev.o 00:03:10.336 CC lib/bdev/part.o 00:03:10.336 CC lib/bdev/bdev_rpc.o 00:03:10.336 CC lib/bdev/bdev_zone.o 00:03:10.336 CC lib/bdev/scsi_nvme.o 00:03:10.594 SYMLINK libspdk_nvme.so 00:03:10.594 LIB libspdk_fuse_dispatcher.a 00:03:10.594 SO libspdk_fuse_dispatcher.so.1.0 00:03:10.594 SYMLINK libspdk_fuse_dispatcher.so 00:03:11.566 LIB libspdk_blob.a 00:03:11.566 SO libspdk_blob.so.12.0 00:03:11.566 SYMLINK libspdk_blob.so 00:03:11.824 CC lib/lvol/lvol.o 00:03:11.824 CC lib/blobfs/blobfs.o 00:03:11.824 CC lib/blobfs/tree.o 00:03:12.390 LIB libspdk_bdev.a 00:03:12.390 SO libspdk_bdev.so.17.0 00:03:12.390 LIB libspdk_blobfs.a 00:03:12.390 LIB libspdk_lvol.a 00:03:12.390 SO libspdk_blobfs.so.11.0 00:03:12.390 SYMLINK libspdk_bdev.so 00:03:12.390 SO libspdk_lvol.so.11.0 00:03:12.390 SYMLINK libspdk_blobfs.so 00:03:12.647 SYMLINK libspdk_lvol.so 00:03:12.647 CC lib/nvmf/ctrlr.o 00:03:12.647 CC lib/nvmf/ctrlr_discovery.o 00:03:12.647 CC lib/nvmf/subsystem.o 00:03:12.647 CC lib/nvmf/ctrlr_bdev.o 00:03:12.647 CC lib/nvmf/nvmf.o 00:03:12.647 CC lib/nvmf/transport.o 00:03:12.647 CC lib/nvmf/nvmf_rpc.o 00:03:12.647 CC lib/nvmf/tcp.o 00:03:12.647 CC lib/nvmf/mdns_server.o 00:03:12.648 CC lib/nvmf/stubs.o 00:03:12.648 CC lib/ublk/ublk.o 00:03:12.648 CC lib/nvmf/vfio_user.o 00:03:12.648 CC lib/ublk/ublk_rpc.o 00:03:12.648 CC lib/nvmf/rdma.o 00:03:12.648 CC lib/nvmf/auth.o 00:03:12.648 CC lib/ftl/ftl_core.o 00:03:12.648 CC lib/ftl/ftl_init.o 00:03:12.648 CC lib/ftl/ftl_debug.o 00:03:12.648 CC lib/ftl/ftl_layout.o 00:03:12.648 CC lib/ftl/ftl_io.o 00:03:12.648 CC lib/ftl/ftl_sb.o 00:03:12.648 CC lib/ftl/ftl_l2p.o 00:03:12.648 CC lib/ftl/ftl_l2p_flat.o 00:03:12.648 CC lib/nbd/nbd.o 00:03:12.648 CC lib/ftl/ftl_nv_cache.o 00:03:12.648 CC lib/nbd/nbd_rpc.o 00:03:12.648 CC lib/ftl/ftl_band.o 00:03:12.648 CC lib/ftl/ftl_band_ops.o 00:03:12.648 CC lib/ftl/ftl_writer.o 00:03:12.648 CC lib/ftl/ftl_rq.o 00:03:12.648 CC lib/ftl/ftl_reloc.o 00:03:12.648 CC lib/ftl/ftl_l2p_cache.o 00:03:12.648 CC lib/scsi/dev.o 00:03:12.648 CC lib/ftl/ftl_p2l.o 00:03:12.648 CC lib/scsi/lun.o 00:03:12.648 CC lib/ftl/ftl_p2l_log.o 00:03:12.648 CC lib/ftl/mngt/ftl_mngt.o 00:03:12.648 CC lib/scsi/port.o 00:03:12.648 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:12.648 CC lib/scsi/scsi.o 00:03:12.648 CC lib/scsi/scsi_bdev.o 00:03:12.648 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:12.648 CC lib/scsi/scsi_pr.o 00:03:12.648 CC lib/scsi/scsi_rpc.o 00:03:12.648 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.648 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.648 CC lib/scsi/task.o 00:03:12.648 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.648 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:12.648 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:12.648 CC lib/ftl/mngt/ftl_mngt_band.o 
00:03:12.648 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.904 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.904 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:12.904 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:12.904 CC lib/ftl/utils/ftl_md.o 00:03:12.904 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.904 CC lib/ftl/utils/ftl_conf.o 00:03:12.904 CC lib/ftl/utils/ftl_mempool.o 00:03:12.904 CC lib/ftl/utils/ftl_property.o 00:03:12.904 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.904 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.904 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.904 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.904 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.904 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.904 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:12.904 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:12.904 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.904 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:12.904 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:12.904 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:12.904 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:12.904 CC lib/ftl/base/ftl_base_bdev.o 00:03:12.904 CC lib/ftl/base/ftl_base_dev.o 00:03:12.904 CC lib/ftl/ftl_trace.o 00:03:13.162 LIB libspdk_nbd.a 00:03:13.162 SO libspdk_nbd.so.7.0 00:03:13.419 SYMLINK libspdk_nbd.so 00:03:13.419 LIB libspdk_scsi.a 00:03:13.419 SO libspdk_scsi.so.9.0 00:03:13.419 SYMLINK libspdk_scsi.so 00:03:13.675 LIB libspdk_ublk.a 00:03:13.675 SO libspdk_ublk.so.3.0 00:03:13.675 LIB libspdk_ftl.a 00:03:13.675 SYMLINK libspdk_ublk.so 00:03:13.675 SO libspdk_ftl.so.9.0 00:03:13.675 CC lib/vhost/vhost_rpc.o 00:03:13.675 CC lib/vhost/vhost.o 00:03:13.675 CC lib/vhost/vhost_scsi.o 00:03:13.675 CC lib/vhost/vhost_blk.o 00:03:13.675 CC lib/vhost/rte_vhost_user.o 00:03:13.675 CC lib/iscsi/conn.o 00:03:13.932 CC lib/iscsi/init_grp.o 00:03:13.933 CC lib/iscsi/iscsi.o 00:03:13.933 CC lib/iscsi/param.o 00:03:13.933 CC lib/iscsi/portal_grp.o 00:03:13.933 CC lib/iscsi/tgt_node.o 00:03:13.933 CC lib/iscsi/iscsi_subsystem.o 00:03:13.933 CC lib/iscsi/iscsi_rpc.o 00:03:13.933 CC lib/iscsi/task.o 00:03:13.933 SYMLINK libspdk_ftl.so 00:03:14.499 LIB libspdk_nvmf.a 00:03:14.499 SO libspdk_nvmf.so.20.0 00:03:14.499 LIB libspdk_vhost.a 00:03:14.758 SO libspdk_vhost.so.8.0 00:03:14.758 SYMLINK libspdk_vhost.so 00:03:14.758 SYMLINK libspdk_nvmf.so 00:03:14.758 LIB libspdk_iscsi.a 00:03:14.758 SO libspdk_iscsi.so.8.0 00:03:15.017 SYMLINK libspdk_iscsi.so 00:03:15.583 CC module/vfu_device/vfu_virtio.o 00:03:15.583 CC module/vfu_device/vfu_virtio_blk.o 00:03:15.583 CC module/vfu_device/vfu_virtio_scsi.o 00:03:15.583 CC module/vfu_device/vfu_virtio_fs.o 00:03:15.583 CC module/vfu_device/vfu_virtio_rpc.o 00:03:15.583 CC module/env_dpdk/env_dpdk_rpc.o 00:03:15.583 CC module/blob/bdev/blob_bdev.o 00:03:15.583 CC module/accel/ioat/accel_ioat.o 00:03:15.583 CC module/accel/ioat/accel_ioat_rpc.o 00:03:15.583 CC module/keyring/file/keyring_rpc.o 00:03:15.583 CC module/keyring/file/keyring.o 00:03:15.583 CC module/sock/posix/posix.o 00:03:15.583 CC module/accel/dsa/accel_dsa.o 00:03:15.583 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.583 CC module/fsdev/aio/fsdev_aio.o 00:03:15.583 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:15.583 CC module/accel/error/accel_error.o 00:03:15.583 CC module/accel/error/accel_error_rpc.o 00:03:15.583 CC module/scheduler/gscheduler/gscheduler.o 00:03:15.583 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.583 CC module/fsdev/aio/linux_aio_mgr.o 00:03:15.583 CC module/accel/iaa/accel_iaa.o 00:03:15.583 CC module/accel/iaa/accel_iaa_rpc.o 
00:03:15.583 LIB libspdk_env_dpdk_rpc.a 00:03:15.583 CC module/keyring/linux/keyring.o 00:03:15.583 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:15.583 CC module/keyring/linux/keyring_rpc.o 00:03:15.583 SO libspdk_env_dpdk_rpc.so.6.0 00:03:15.841 SYMLINK libspdk_env_dpdk_rpc.so 00:03:15.841 LIB libspdk_keyring_file.a 00:03:15.841 LIB libspdk_scheduler_gscheduler.a 00:03:15.841 LIB libspdk_keyring_linux.a 00:03:15.841 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.841 LIB libspdk_accel_ioat.a 00:03:15.841 SO libspdk_keyring_linux.so.1.0 00:03:15.841 SO libspdk_keyring_file.so.2.0 00:03:15.841 LIB libspdk_scheduler_dynamic.a 00:03:15.841 SO libspdk_scheduler_gscheduler.so.4.0 00:03:15.841 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:15.841 SO libspdk_accel_ioat.so.6.0 00:03:15.841 LIB libspdk_accel_iaa.a 00:03:15.841 LIB libspdk_blob_bdev.a 00:03:15.841 SO libspdk_scheduler_dynamic.so.4.0 00:03:15.841 LIB libspdk_accel_error.a 00:03:15.841 SO libspdk_accel_iaa.so.3.0 00:03:15.841 SYMLINK libspdk_keyring_linux.so 00:03:15.841 SO libspdk_blob_bdev.so.12.0 00:03:15.841 SYMLINK libspdk_scheduler_gscheduler.so 00:03:15.841 SYMLINK libspdk_accel_ioat.so 00:03:15.841 SYMLINK libspdk_keyring_file.so 00:03:15.841 SO libspdk_accel_error.so.2.0 00:03:15.841 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:15.841 LIB libspdk_accel_dsa.a 00:03:15.841 SYMLINK libspdk_scheduler_dynamic.so 00:03:15.841 SYMLINK libspdk_blob_bdev.so 00:03:15.841 SYMLINK libspdk_accel_iaa.so 00:03:15.841 SO libspdk_accel_dsa.so.5.0 00:03:15.841 SYMLINK libspdk_accel_error.so 00:03:16.098 LIB libspdk_vfu_device.a 00:03:16.098 SYMLINK libspdk_accel_dsa.so 00:03:16.098 SO libspdk_vfu_device.so.3.0 00:03:16.098 SYMLINK libspdk_vfu_device.so 00:03:16.098 LIB libspdk_fsdev_aio.a 00:03:16.098 SO libspdk_fsdev_aio.so.1.0 00:03:16.098 LIB libspdk_sock_posix.a 00:03:16.356 SO libspdk_sock_posix.so.6.0 00:03:16.356 SYMLINK libspdk_fsdev_aio.so 00:03:16.356 CC module/bdev/error/vbdev_error_rpc.o 00:03:16.356 CC module/bdev/error/vbdev_error.o 00:03:16.356 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.356 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:16.356 SYMLINK libspdk_sock_posix.so 00:03:16.356 CC module/bdev/iscsi/bdev_iscsi.o 00:03:16.356 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:16.356 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:16.356 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:16.356 CC module/bdev/gpt/gpt.o 00:03:16.356 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.356 CC module/bdev/passthru/vbdev_passthru.o 00:03:16.356 CC module/bdev/split/vbdev_split_rpc.o 00:03:16.356 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:16.356 CC module/bdev/malloc/bdev_malloc.o 00:03:16.356 CC module/bdev/split/vbdev_split.o 00:03:16.356 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:16.356 CC module/bdev/raid/bdev_raid.o 00:03:16.356 CC module/bdev/delay/vbdev_delay.o 00:03:16.356 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.356 CC module/bdev/null/bdev_null.o 00:03:16.356 CC module/bdev/raid/raid0.o 00:03:16.356 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:16.356 CC module/bdev/raid/bdev_raid_rpc.o 00:03:16.356 CC module/bdev/null/bdev_null_rpc.o 00:03:16.356 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.356 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:16.356 CC module/bdev/raid/bdev_raid_sb.o 00:03:16.356 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:16.356 CC module/bdev/raid/raid1.o 00:03:16.356 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:16.356 CC module/bdev/raid/concat.o 00:03:16.356 CC 
module/bdev/aio/bdev_aio.o 00:03:16.356 CC module/bdev/aio/bdev_aio_rpc.o 00:03:16.356 CC module/bdev/nvme/bdev_nvme.o 00:03:16.356 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:16.357 CC module/bdev/nvme/nvme_rpc.o 00:03:16.357 CC module/bdev/ftl/bdev_ftl.o 00:03:16.357 CC module/bdev/nvme/bdev_mdns_client.o 00:03:16.357 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:16.357 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:16.357 CC module/bdev/nvme/vbdev_opal.o 00:03:16.357 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.614 LIB libspdk_blobfs_bdev.a 00:03:16.614 SO libspdk_blobfs_bdev.so.6.0 00:03:16.614 LIB libspdk_bdev_error.a 00:03:16.614 LIB libspdk_bdev_split.a 00:03:16.614 SO libspdk_bdev_error.so.6.0 00:03:16.614 LIB libspdk_bdev_null.a 00:03:16.614 SO libspdk_bdev_split.so.6.0 00:03:16.614 LIB libspdk_bdev_gpt.a 00:03:16.614 LIB libspdk_bdev_zone_block.a 00:03:16.614 SYMLINK libspdk_blobfs_bdev.so 00:03:16.614 SO libspdk_bdev_null.so.6.0 00:03:16.614 SYMLINK libspdk_bdev_error.so 00:03:16.614 SO libspdk_bdev_gpt.so.6.0 00:03:16.614 SO libspdk_bdev_zone_block.so.6.0 00:03:16.615 LIB libspdk_bdev_aio.a 00:03:16.615 LIB libspdk_bdev_iscsi.a 00:03:16.615 LIB libspdk_bdev_passthru.a 00:03:16.615 SYMLINK libspdk_bdev_split.so 00:03:16.615 LIB libspdk_bdev_ftl.a 00:03:16.615 LIB libspdk_bdev_delay.a 00:03:16.615 SO libspdk_bdev_aio.so.6.0 00:03:16.615 SO libspdk_bdev_iscsi.so.6.0 00:03:16.615 SO libspdk_bdev_passthru.so.6.0 00:03:16.872 SYMLINK libspdk_bdev_null.so 00:03:16.872 LIB libspdk_bdev_malloc.a 00:03:16.872 SO libspdk_bdev_ftl.so.6.0 00:03:16.872 SO libspdk_bdev_delay.so.6.0 00:03:16.872 SYMLINK libspdk_bdev_gpt.so 00:03:16.872 SYMLINK libspdk_bdev_zone_block.so 00:03:16.872 SO libspdk_bdev_malloc.so.6.0 00:03:16.872 SYMLINK libspdk_bdev_passthru.so 00:03:16.872 SYMLINK libspdk_bdev_iscsi.so 00:03:16.872 SYMLINK libspdk_bdev_aio.so 00:03:16.872 SYMLINK libspdk_bdev_ftl.so 00:03:16.872 SYMLINK libspdk_bdev_delay.so 00:03:16.872 LIB libspdk_bdev_virtio.a 00:03:16.872 SYMLINK libspdk_bdev_malloc.so 00:03:16.872 LIB libspdk_bdev_lvol.a 00:03:16.872 SO libspdk_bdev_virtio.so.6.0 00:03:16.872 SO libspdk_bdev_lvol.so.6.0 00:03:16.872 SYMLINK libspdk_bdev_virtio.so 00:03:16.872 SYMLINK libspdk_bdev_lvol.so 00:03:17.130 LIB libspdk_bdev_raid.a 00:03:17.130 SO libspdk_bdev_raid.so.6.0 00:03:17.388 SYMLINK libspdk_bdev_raid.so 00:03:18.323 LIB libspdk_bdev_nvme.a 00:03:18.323 SO libspdk_bdev_nvme.so.7.1 00:03:18.323 SYMLINK libspdk_bdev_nvme.so 00:03:18.891 CC module/event/subsystems/iobuf/iobuf.o 00:03:18.891 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:18.891 CC module/event/subsystems/keyring/keyring.o 00:03:18.891 CC module/event/subsystems/sock/sock.o 00:03:18.891 CC module/event/subsystems/scheduler/scheduler.o 00:03:18.891 CC module/event/subsystems/fsdev/fsdev.o 00:03:18.891 CC module/event/subsystems/vmd/vmd.o 00:03:18.891 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:18.891 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.891 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:19.149 LIB libspdk_event_keyring.a 00:03:19.149 LIB libspdk_event_sock.a 00:03:19.149 LIB libspdk_event_vmd.a 00:03:19.149 LIB libspdk_event_iobuf.a 00:03:19.149 LIB libspdk_event_fsdev.a 00:03:19.149 LIB libspdk_event_vfu_tgt.a 00:03:19.149 LIB libspdk_event_scheduler.a 00:03:19.149 SO libspdk_event_keyring.so.1.0 00:03:19.149 LIB libspdk_event_vhost_blk.a 00:03:19.149 SO libspdk_event_sock.so.5.0 00:03:19.149 SO libspdk_event_vmd.so.6.0 00:03:19.149 SO libspdk_event_iobuf.so.3.0 00:03:19.149 SO 
libspdk_event_fsdev.so.1.0 00:03:19.149 SO libspdk_event_vfu_tgt.so.3.0 00:03:19.149 SO libspdk_event_scheduler.so.4.0 00:03:19.149 SO libspdk_event_vhost_blk.so.3.0 00:03:19.149 SYMLINK libspdk_event_keyring.so 00:03:19.149 SYMLINK libspdk_event_sock.so 00:03:19.149 SYMLINK libspdk_event_iobuf.so 00:03:19.149 SYMLINK libspdk_event_vmd.so 00:03:19.149 SYMLINK libspdk_event_vfu_tgt.so 00:03:19.149 SYMLINK libspdk_event_fsdev.so 00:03:19.150 SYMLINK libspdk_event_scheduler.so 00:03:19.150 SYMLINK libspdk_event_vhost_blk.so 00:03:19.408 CC module/event/subsystems/accel/accel.o 00:03:19.666 LIB libspdk_event_accel.a 00:03:19.666 SO libspdk_event_accel.so.6.0 00:03:19.666 SYMLINK libspdk_event_accel.so 00:03:20.233 CC module/event/subsystems/bdev/bdev.o 00:03:20.233 LIB libspdk_event_bdev.a 00:03:20.233 SO libspdk_event_bdev.so.6.0 00:03:20.233 SYMLINK libspdk_event_bdev.so 00:03:20.491 CC module/event/subsystems/scsi/scsi.o 00:03:20.491 CC module/event/subsystems/nbd/nbd.o 00:03:20.491 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.491 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.491 CC module/event/subsystems/ublk/ublk.o 00:03:20.750 LIB libspdk_event_nbd.a 00:03:20.750 LIB libspdk_event_scsi.a 00:03:20.750 SO libspdk_event_nbd.so.6.0 00:03:20.750 LIB libspdk_event_ublk.a 00:03:20.750 SO libspdk_event_scsi.so.6.0 00:03:20.750 SO libspdk_event_ublk.so.3.0 00:03:20.750 SYMLINK libspdk_event_nbd.so 00:03:20.750 LIB libspdk_event_nvmf.a 00:03:20.750 SYMLINK libspdk_event_scsi.so 00:03:20.750 SYMLINK libspdk_event_ublk.so 00:03:20.750 SO libspdk_event_nvmf.so.6.0 00:03:21.008 SYMLINK libspdk_event_nvmf.so 00:03:21.008 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:21.008 CC module/event/subsystems/iscsi/iscsi.o 00:03:21.266 LIB libspdk_event_vhost_scsi.a 00:03:21.266 LIB libspdk_event_iscsi.a 00:03:21.266 SO libspdk_event_vhost_scsi.so.3.0 00:03:21.266 SO libspdk_event_iscsi.so.6.0 00:03:21.266 SYMLINK libspdk_event_vhost_scsi.so 00:03:21.266 SYMLINK libspdk_event_iscsi.so 00:03:21.525 SO libspdk.so.6.0 00:03:21.525 SYMLINK libspdk.so 00:03:21.783 CXX app/trace/trace.o 00:03:21.783 CC app/trace_record/trace_record.o 00:03:21.783 CC app/spdk_lspci/spdk_lspci.o 00:03:21.783 CC app/spdk_nvme_identify/identify.o 00:03:21.783 TEST_HEADER include/spdk/barrier.h 00:03:21.783 TEST_HEADER include/spdk/accel.h 00:03:21.783 TEST_HEADER include/spdk/assert.h 00:03:21.783 TEST_HEADER include/spdk/accel_module.h 00:03:21.783 TEST_HEADER include/spdk/base64.h 00:03:21.783 CC app/spdk_nvme_discover/discovery_aer.o 00:03:21.783 TEST_HEADER include/spdk/bdev_zone.h 00:03:21.783 TEST_HEADER include/spdk/bdev.h 00:03:21.783 TEST_HEADER include/spdk/bit_array.h 00:03:21.783 TEST_HEADER include/spdk/bit_pool.h 00:03:21.783 TEST_HEADER include/spdk/bdev_module.h 00:03:21.783 CC app/spdk_nvme_perf/perf.o 00:03:21.783 TEST_HEADER include/spdk/blob_bdev.h 00:03:21.783 CC test/rpc_client/rpc_client_test.o 00:03:21.783 TEST_HEADER include/spdk/blobfs.h 00:03:21.783 CC app/spdk_top/spdk_top.o 00:03:21.783 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:21.783 TEST_HEADER include/spdk/blob.h 00:03:21.783 TEST_HEADER include/spdk/config.h 00:03:21.783 TEST_HEADER include/spdk/conf.h 00:03:21.783 TEST_HEADER include/spdk/cpuset.h 00:03:21.783 TEST_HEADER include/spdk/crc16.h 00:03:21.783 TEST_HEADER include/spdk/dif.h 00:03:21.783 TEST_HEADER include/spdk/dma.h 00:03:21.783 TEST_HEADER include/spdk/crc64.h 00:03:21.783 TEST_HEADER include/spdk/crc32.h 00:03:21.783 TEST_HEADER include/spdk/endian.h 00:03:21.783 
TEST_HEADER include/spdk/env_dpdk.h 00:03:21.783 TEST_HEADER include/spdk/env.h 00:03:21.783 TEST_HEADER include/spdk/fd_group.h 00:03:21.783 TEST_HEADER include/spdk/event.h 00:03:21.783 TEST_HEADER include/spdk/file.h 00:03:21.783 TEST_HEADER include/spdk/fd.h 00:03:21.783 TEST_HEADER include/spdk/fsdev.h 00:03:21.783 TEST_HEADER include/spdk/fsdev_module.h 00:03:21.783 TEST_HEADER include/spdk/ftl.h 00:03:21.783 TEST_HEADER include/spdk/gpt_spec.h 00:03:21.783 TEST_HEADER include/spdk/hexlify.h 00:03:21.783 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:21.783 TEST_HEADER include/spdk/histogram_data.h 00:03:21.783 TEST_HEADER include/spdk/idxd.h 00:03:21.783 TEST_HEADER include/spdk/idxd_spec.h 00:03:21.783 TEST_HEADER include/spdk/init.h 00:03:21.783 TEST_HEADER include/spdk/ioat_spec.h 00:03:21.783 CC app/iscsi_tgt/iscsi_tgt.o 00:03:21.783 TEST_HEADER include/spdk/keyring.h 00:03:21.783 TEST_HEADER include/spdk/json.h 00:03:21.783 TEST_HEADER include/spdk/iscsi_spec.h 00:03:21.783 TEST_HEADER include/spdk/ioat.h 00:03:21.783 TEST_HEADER include/spdk/jsonrpc.h 00:03:21.783 TEST_HEADER include/spdk/keyring_module.h 00:03:21.783 TEST_HEADER include/spdk/likely.h 00:03:21.783 TEST_HEADER include/spdk/log.h 00:03:21.783 TEST_HEADER include/spdk/lvol.h 00:03:21.783 TEST_HEADER include/spdk/md5.h 00:03:21.783 TEST_HEADER include/spdk/memory.h 00:03:21.783 TEST_HEADER include/spdk/mmio.h 00:03:21.783 TEST_HEADER include/spdk/nbd.h 00:03:21.783 TEST_HEADER include/spdk/net.h 00:03:21.783 CC app/spdk_dd/spdk_dd.o 00:03:21.783 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:21.783 TEST_HEADER include/spdk/notify.h 00:03:21.783 TEST_HEADER include/spdk/nvme.h 00:03:21.783 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:21.783 TEST_HEADER include/spdk/nvme_intel.h 00:03:21.783 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:21.783 TEST_HEADER include/spdk/nvme_zns.h 00:03:21.783 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:21.783 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:21.783 TEST_HEADER include/spdk/nvmf.h 00:03:21.783 TEST_HEADER include/spdk/nvme_spec.h 00:03:21.783 CC app/nvmf_tgt/nvmf_main.o 00:03:21.783 TEST_HEADER include/spdk/nvmf_spec.h 00:03:21.783 TEST_HEADER include/spdk/opal.h 00:03:21.783 TEST_HEADER include/spdk/opal_spec.h 00:03:21.783 TEST_HEADER include/spdk/pipe.h 00:03:21.783 TEST_HEADER include/spdk/nvmf_transport.h 00:03:21.783 TEST_HEADER include/spdk/queue.h 00:03:21.783 TEST_HEADER include/spdk/pci_ids.h 00:03:22.050 TEST_HEADER include/spdk/rpc.h 00:03:22.050 TEST_HEADER include/spdk/scheduler.h 00:03:22.050 TEST_HEADER include/spdk/reduce.h 00:03:22.050 TEST_HEADER include/spdk/scsi.h 00:03:22.050 TEST_HEADER include/spdk/sock.h 00:03:22.050 TEST_HEADER include/spdk/scsi_spec.h 00:03:22.050 TEST_HEADER include/spdk/stdinc.h 00:03:22.050 TEST_HEADER include/spdk/string.h 00:03:22.050 TEST_HEADER include/spdk/trace.h 00:03:22.050 TEST_HEADER include/spdk/trace_parser.h 00:03:22.050 TEST_HEADER include/spdk/thread.h 00:03:22.050 TEST_HEADER include/spdk/ublk.h 00:03:22.050 TEST_HEADER include/spdk/uuid.h 00:03:22.050 TEST_HEADER include/spdk/tree.h 00:03:22.050 TEST_HEADER include/spdk/util.h 00:03:22.050 TEST_HEADER include/spdk/version.h 00:03:22.050 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.050 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.050 TEST_HEADER include/spdk/vhost.h 00:03:22.050 TEST_HEADER include/spdk/vmd.h 00:03:22.050 TEST_HEADER include/spdk/zipf.h 00:03:22.050 TEST_HEADER include/spdk/xor.h 00:03:22.050 CXX test/cpp_headers/accel_module.o 
00:03:22.050 CXX test/cpp_headers/accel.o 00:03:22.050 CXX test/cpp_headers/assert.o 00:03:22.050 CXX test/cpp_headers/base64.o 00:03:22.050 CXX test/cpp_headers/barrier.o 00:03:22.050 CXX test/cpp_headers/bdev.o 00:03:22.050 CXX test/cpp_headers/bdev_module.o 00:03:22.050 CXX test/cpp_headers/bit_array.o 00:03:22.050 CXX test/cpp_headers/bdev_zone.o 00:03:22.050 CC app/spdk_tgt/spdk_tgt.o 00:03:22.050 CXX test/cpp_headers/bit_pool.o 00:03:22.050 CXX test/cpp_headers/blob_bdev.o 00:03:22.050 CXX test/cpp_headers/blobfs_bdev.o 00:03:22.050 CXX test/cpp_headers/blobfs.o 00:03:22.050 CXX test/cpp_headers/blob.o 00:03:22.050 CXX test/cpp_headers/config.o 00:03:22.050 CXX test/cpp_headers/conf.o 00:03:22.050 CXX test/cpp_headers/cpuset.o 00:03:22.050 CXX test/cpp_headers/crc16.o 00:03:22.050 CXX test/cpp_headers/crc32.o 00:03:22.050 CXX test/cpp_headers/crc64.o 00:03:22.050 CXX test/cpp_headers/dif.o 00:03:22.050 CXX test/cpp_headers/dma.o 00:03:22.050 CXX test/cpp_headers/env_dpdk.o 00:03:22.050 CXX test/cpp_headers/endian.o 00:03:22.050 CXX test/cpp_headers/event.o 00:03:22.050 CXX test/cpp_headers/env.o 00:03:22.050 CXX test/cpp_headers/fd_group.o 00:03:22.050 CXX test/cpp_headers/fd.o 00:03:22.050 CXX test/cpp_headers/fsdev.o 00:03:22.051 CXX test/cpp_headers/file.o 00:03:22.051 CXX test/cpp_headers/fsdev_module.o 00:03:22.051 CXX test/cpp_headers/fuse_dispatcher.o 00:03:22.051 CXX test/cpp_headers/ftl.o 00:03:22.051 CXX test/cpp_headers/gpt_spec.o 00:03:22.051 CXX test/cpp_headers/histogram_data.o 00:03:22.051 CXX test/cpp_headers/hexlify.o 00:03:22.051 CXX test/cpp_headers/idxd_spec.o 00:03:22.051 CXX test/cpp_headers/init.o 00:03:22.051 CXX test/cpp_headers/idxd.o 00:03:22.051 CXX test/cpp_headers/ioat.o 00:03:22.051 CXX test/cpp_headers/ioat_spec.o 00:03:22.051 CXX test/cpp_headers/iscsi_spec.o 00:03:22.051 CXX test/cpp_headers/json.o 00:03:22.051 CXX test/cpp_headers/jsonrpc.o 00:03:22.051 CXX test/cpp_headers/keyring.o 00:03:22.051 CXX test/cpp_headers/keyring_module.o 00:03:22.051 CXX test/cpp_headers/log.o 00:03:22.051 CXX test/cpp_headers/lvol.o 00:03:22.051 CXX test/cpp_headers/likely.o 00:03:22.051 CXX test/cpp_headers/memory.o 00:03:22.051 CXX test/cpp_headers/md5.o 00:03:22.051 CXX test/cpp_headers/mmio.o 00:03:22.051 CXX test/cpp_headers/nbd.o 00:03:22.051 CXX test/cpp_headers/net.o 00:03:22.051 CXX test/cpp_headers/notify.o 00:03:22.051 CC examples/ioat/perf/perf.o 00:03:22.051 CXX test/cpp_headers/nvme.o 00:03:22.051 CXX test/cpp_headers/nvme_ocssd.o 00:03:22.051 CXX test/cpp_headers/nvme_intel.o 00:03:22.051 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:22.051 CXX test/cpp_headers/nvme_spec.o 00:03:22.051 CXX test/cpp_headers/nvme_zns.o 00:03:22.051 CXX test/cpp_headers/nvmf_cmd.o 00:03:22.051 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:22.051 CXX test/cpp_headers/nvmf_spec.o 00:03:22.051 CXX test/cpp_headers/nvmf.o 00:03:22.051 CXX test/cpp_headers/opal.o 00:03:22.051 CXX test/cpp_headers/nvmf_transport.o 00:03:22.051 CC examples/ioat/verify/verify.o 00:03:22.051 CC examples/util/zipf/zipf.o 00:03:22.051 CC test/thread/poller_perf/poller_perf.o 00:03:22.051 CC app/fio/nvme/fio_plugin.o 00:03:22.051 CXX test/cpp_headers/opal_spec.o 00:03:22.051 CC test/env/pci/pci_ut.o 00:03:22.051 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:22.051 CC test/env/memory/memory_ut.o 00:03:22.051 CC test/app/jsoncat/jsoncat.o 00:03:22.051 CC test/app/histogram_perf/histogram_perf.o 00:03:22.051 CC test/env/vtophys/vtophys.o 00:03:22.051 CC test/app/stub/stub.o 00:03:22.051 CC 
test/app/bdev_svc/bdev_svc.o 00:03:22.051 CC test/dma/test_dma/test_dma.o 00:03:22.051 CC app/fio/bdev/fio_plugin.o 00:03:22.316 LINK spdk_lspci 00:03:22.316 LINK nvmf_tgt 00:03:22.316 LINK iscsi_tgt 00:03:22.578 LINK rpc_client_test 00:03:22.578 CC test/env/mem_callbacks/mem_callbacks.o 00:03:22.578 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:22.578 LINK spdk_trace_record 00:03:22.578 LINK spdk_nvme_discover 00:03:22.578 LINK interrupt_tgt 00:03:22.578 LINK jsoncat 00:03:22.578 CXX test/cpp_headers/pci_ids.o 00:03:22.578 CXX test/cpp_headers/pipe.o 00:03:22.578 LINK poller_perf 00:03:22.578 CXX test/cpp_headers/reduce.o 00:03:22.578 CXX test/cpp_headers/queue.o 00:03:22.578 CXX test/cpp_headers/rpc.o 00:03:22.578 CXX test/cpp_headers/scheduler.o 00:03:22.578 CXX test/cpp_headers/scsi.o 00:03:22.578 LINK vtophys 00:03:22.578 CXX test/cpp_headers/scsi_spec.o 00:03:22.578 CXX test/cpp_headers/sock.o 00:03:22.578 CXX test/cpp_headers/stdinc.o 00:03:22.578 CXX test/cpp_headers/string.o 00:03:22.578 LINK verify 00:03:22.578 CXX test/cpp_headers/thread.o 00:03:22.578 CXX test/cpp_headers/trace.o 00:03:22.578 CXX test/cpp_headers/trace_parser.o 00:03:22.578 CXX test/cpp_headers/tree.o 00:03:22.578 CXX test/cpp_headers/ublk.o 00:03:22.578 CXX test/cpp_headers/util.o 00:03:22.578 CXX test/cpp_headers/uuid.o 00:03:22.578 LINK zipf 00:03:22.578 CXX test/cpp_headers/version.o 00:03:22.578 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.578 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.578 CXX test/cpp_headers/vhost.o 00:03:22.578 CXX test/cpp_headers/vmd.o 00:03:22.578 LINK histogram_perf 00:03:22.578 CXX test/cpp_headers/xor.o 00:03:22.578 CXX test/cpp_headers/zipf.o 00:03:22.578 LINK bdev_svc 00:03:22.836 LINK env_dpdk_post_init 00:03:22.836 LINK spdk_tgt 00:03:22.836 LINK ioat_perf 00:03:22.836 LINK stub 00:03:22.836 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:22.836 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:22.836 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:22.836 LINK spdk_dd 00:03:22.836 LINK spdk_trace 00:03:22.836 LINK pci_ut 00:03:22.836 LINK spdk_nvme 00:03:23.093 LINK nvme_fuzz 00:03:23.093 LINK test_dma 00:03:23.093 CC examples/vmd/led/led.o 00:03:23.093 CC examples/idxd/perf/perf.o 00:03:23.093 CC examples/vmd/lsvmd/lsvmd.o 00:03:23.093 LINK spdk_nvme_perf 00:03:23.093 CC test/event/reactor/reactor.o 00:03:23.093 CC test/event/reactor_perf/reactor_perf.o 00:03:23.093 CC examples/sock/hello_world/hello_sock.o 00:03:23.093 LINK spdk_bdev 00:03:23.093 LINK spdk_nvme_identify 00:03:23.093 CC test/event/event_perf/event_perf.o 00:03:23.093 LINK vhost_fuzz 00:03:23.093 CC test/event/app_repeat/app_repeat.o 00:03:23.093 CC test/event/scheduler/scheduler.o 00:03:23.093 CC examples/thread/thread/thread_ex.o 00:03:23.093 LINK mem_callbacks 00:03:23.351 LINK spdk_top 00:03:23.351 CC app/vhost/vhost.o 00:03:23.351 LINK led 00:03:23.351 LINK reactor 00:03:23.351 LINK lsvmd 00:03:23.351 LINK reactor_perf 00:03:23.351 LINK event_perf 00:03:23.351 LINK app_repeat 00:03:23.351 LINK hello_sock 00:03:23.351 LINK idxd_perf 00:03:23.351 LINK scheduler 00:03:23.351 LINK thread 00:03:23.351 LINK vhost 00:03:23.608 LINK memory_ut 00:03:23.608 CC test/nvme/sgl/sgl.o 00:03:23.608 CC test/nvme/cuse/cuse.o 00:03:23.608 CC test/nvme/overhead/overhead.o 00:03:23.608 CC test/nvme/connect_stress/connect_stress.o 00:03:23.608 CC test/nvme/aer/aer.o 00:03:23.608 CC test/nvme/err_injection/err_injection.o 00:03:23.608 CC test/nvme/e2edp/nvme_dp.o 00:03:23.608 CC test/nvme/reset/reset.o 00:03:23.608 CC 
test/nvme/compliance/nvme_compliance.o 00:03:23.608 CC test/nvme/fdp/fdp.o 00:03:23.608 CC test/nvme/boot_partition/boot_partition.o 00:03:23.608 CC test/nvme/reserve/reserve.o 00:03:23.608 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:23.608 CC test/nvme/fused_ordering/fused_ordering.o 00:03:23.608 CC test/nvme/simple_copy/simple_copy.o 00:03:23.608 CC test/nvme/startup/startup.o 00:03:23.608 CC test/accel/dif/dif.o 00:03:23.608 CC test/blobfs/mkfs/mkfs.o 00:03:23.608 CC test/lvol/esnap/esnap.o 00:03:23.608 LINK err_injection 00:03:23.608 LINK boot_partition 00:03:23.608 LINK connect_stress 00:03:23.608 LINK reserve 00:03:23.866 LINK doorbell_aers 00:03:23.866 LINK fused_ordering 00:03:23.866 LINK startup 00:03:23.866 LINK sgl 00:03:23.866 LINK nvme_dp 00:03:23.866 LINK reset 00:03:23.866 LINK aer 00:03:23.866 LINK simple_copy 00:03:23.866 LINK mkfs 00:03:23.866 LINK overhead 00:03:23.866 CC examples/nvme/abort/abort.o 00:03:23.866 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:23.866 CC examples/nvme/hotplug/hotplug.o 00:03:23.866 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.866 CC examples/nvme/hello_world/hello_world.o 00:03:23.866 CC examples/nvme/reconnect/reconnect.o 00:03:23.866 LINK nvme_compliance 00:03:23.866 CC examples/nvme/arbitration/arbitration.o 00:03:23.866 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:23.866 LINK fdp 00:03:23.866 CC examples/accel/perf/accel_perf.o 00:03:23.866 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:23.866 CC examples/blob/cli/blobcli.o 00:03:23.866 CC examples/blob/hello_world/hello_blob.o 00:03:23.866 LINK cmb_copy 00:03:24.124 LINK pmr_persistence 00:03:24.124 LINK hotplug 00:03:24.124 LINK hello_world 00:03:24.124 LINK arbitration 00:03:24.124 LINK abort 00:03:24.124 LINK reconnect 00:03:24.124 LINK dif 00:03:24.124 LINK hello_blob 00:03:24.124 LINK hello_fsdev 00:03:24.124 LINK iscsi_fuzz 00:03:24.124 LINK nvme_manage 00:03:24.382 LINK accel_perf 00:03:24.382 LINK blobcli 00:03:24.640 LINK cuse 00:03:24.640 CC test/bdev/bdevio/bdevio.o 00:03:24.640 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.640 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.898 LINK bdevio 00:03:24.899 LINK hello_bdev 00:03:25.465 LINK bdevperf 00:03:25.723 CC examples/nvmf/nvmf/nvmf.o 00:03:25.981 LINK nvmf 00:03:27.358 LINK esnap 00:03:27.617 00:03:27.617 real 0m54.972s 00:03:27.617 user 7m58.285s 00:03:27.617 sys 3m35.686s 00:03:27.617 07:46:21 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:27.617 07:46:21 make -- common/autotest_common.sh@10 -- $ set +x 00:03:27.617 ************************************ 00:03:27.617 END TEST make 00:03:27.617 ************************************ 00:03:27.617 07:46:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:27.617 07:46:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:27.617 07:46:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:27.617 07:46:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.617 07:46:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:27.617 07:46:21 -- pm/common@44 -- $ pid=2176992 00:03:27.617 07:46:21 -- pm/common@50 -- $ kill -TERM 2176992 00:03:27.617 07:46:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.617 07:46:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:27.617 07:46:21 -- pm/common@44 -- $ pid=2176994 00:03:27.617 07:46:21 -- 
pm/common@50 -- $ kill -TERM 2176994 00:03:27.617 07:46:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.617 07:46:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:27.617 07:46:21 -- pm/common@44 -- $ pid=2176999 00:03:27.617 07:46:21 -- pm/common@50 -- $ kill -TERM 2176999 00:03:27.617 07:46:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.617 07:46:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:27.617 07:46:21 -- pm/common@44 -- $ pid=2177028 00:03:27.617 07:46:21 -- pm/common@50 -- $ sudo -E kill -TERM 2177028 00:03:27.617 07:46:21 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:27.617 07:46:21 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:27.617 07:46:21 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:27.617 07:46:21 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:27.617 07:46:21 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:27.617 07:46:21 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:27.617 07:46:21 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:27.617 07:46:21 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:27.617 07:46:21 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:27.617 07:46:21 -- scripts/common.sh@336 -- # IFS=.-: 00:03:27.617 07:46:21 -- scripts/common.sh@336 -- # read -ra ver1 00:03:27.617 07:46:21 -- scripts/common.sh@337 -- # IFS=.-: 00:03:27.617 07:46:21 -- scripts/common.sh@337 -- # read -ra ver2 00:03:27.617 07:46:21 -- scripts/common.sh@338 -- # local 'op=<' 00:03:27.617 07:46:21 -- scripts/common.sh@340 -- # ver1_l=2 00:03:27.617 07:46:21 -- scripts/common.sh@341 -- # ver2_l=1 00:03:27.617 07:46:21 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:27.617 07:46:21 -- scripts/common.sh@344 -- # case "$op" in 00:03:27.617 07:46:21 -- scripts/common.sh@345 -- # : 1 00:03:27.617 07:46:21 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:27.617 07:46:21 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:27.617 07:46:21 -- scripts/common.sh@365 -- # decimal 1 00:03:27.617 07:46:21 -- scripts/common.sh@353 -- # local d=1 00:03:27.617 07:46:21 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:27.617 07:46:21 -- scripts/common.sh@355 -- # echo 1 00:03:27.617 07:46:21 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:27.617 07:46:21 -- scripts/common.sh@366 -- # decimal 2 00:03:27.617 07:46:21 -- scripts/common.sh@353 -- # local d=2 00:03:27.617 07:46:21 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:27.617 07:46:21 -- scripts/common.sh@355 -- # echo 2 00:03:27.617 07:46:21 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:27.617 07:46:21 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:27.617 07:46:21 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:27.617 07:46:21 -- scripts/common.sh@368 -- # return 0 00:03:27.617 07:46:21 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:27.617 07:46:21 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:27.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.617 --rc genhtml_branch_coverage=1 00:03:27.617 --rc genhtml_function_coverage=1 00:03:27.617 --rc genhtml_legend=1 00:03:27.617 --rc geninfo_all_blocks=1 00:03:27.617 --rc geninfo_unexecuted_blocks=1 00:03:27.617 00:03:27.617 ' 00:03:27.617 07:46:21 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:27.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.617 --rc genhtml_branch_coverage=1 00:03:27.617 --rc genhtml_function_coverage=1 00:03:27.617 --rc genhtml_legend=1 00:03:27.617 --rc geninfo_all_blocks=1 00:03:27.617 --rc geninfo_unexecuted_blocks=1 00:03:27.617 00:03:27.617 ' 00:03:27.617 07:46:21 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:27.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.617 --rc genhtml_branch_coverage=1 00:03:27.617 --rc genhtml_function_coverage=1 00:03:27.617 --rc genhtml_legend=1 00:03:27.617 --rc geninfo_all_blocks=1 00:03:27.618 --rc geninfo_unexecuted_blocks=1 00:03:27.618 00:03:27.618 ' 00:03:27.618 07:46:21 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:27.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.618 --rc genhtml_branch_coverage=1 00:03:27.618 --rc genhtml_function_coverage=1 00:03:27.618 --rc genhtml_legend=1 00:03:27.618 --rc geninfo_all_blocks=1 00:03:27.618 --rc geninfo_unexecuted_blocks=1 00:03:27.618 00:03:27.618 ' 00:03:27.618 07:46:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:27.618 07:46:21 -- nvmf/common.sh@7 -- # uname -s 00:03:27.618 07:46:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:27.618 07:46:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:27.618 07:46:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:27.618 07:46:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:27.618 07:46:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:27.618 07:46:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:27.877 07:46:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:27.877 07:46:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:27.877 07:46:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:27.877 07:46:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:27.877 07:46:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:27.877 07:46:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:27.877 07:46:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:27.877 07:46:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:27.877 07:46:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:27.877 07:46:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:27.877 07:46:21 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:27.877 07:46:21 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:27.877 07:46:21 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:27.877 07:46:21 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:27.877 07:46:21 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:27.877 07:46:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.877 07:46:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.877 07:46:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.877 07:46:21 -- paths/export.sh@5 -- # export PATH 00:03:27.877 07:46:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.877 07:46:21 -- nvmf/common.sh@51 -- # : 0 00:03:27.877 07:46:21 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:27.877 07:46:21 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:27.877 07:46:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:27.877 07:46:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:27.877 07:46:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:27.877 07:46:21 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:27.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:27.877 07:46:21 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:27.877 07:46:21 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:27.877 07:46:21 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:27.877 07:46:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:27.877 07:46:21 -- spdk/autotest.sh@32 -- # uname -s 00:03:27.877 07:46:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:27.877 07:46:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:27.877 07:46:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
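The "[: : integer expression expected" message reported from nvmf/common.sh line 33 above comes from bash's [ builtin receiving an empty string where -eq needs an integer (the trace shows '[' '' -eq 1 ']'). A minimal stand-alone sketch of that failure and the usual guard follows; EXAMPLE_FLAG is a hypothetical name, since the actual variable tested in common.sh is not visible in this trace.

  # Sketch only: reproduces the "[: : integer expression expected" error seen
  # above, where an empty value reaches a numeric test. EXAMPLE_FLAG is a
  # placeholder variable name.
  EXAMPLE_FLAG=""
  [ "$EXAMPLE_FLAG" -eq 1 ] && echo enabled       # bash: [: : integer expression expected
  # Defaulting the value before the comparison avoids the error:
  [ "${EXAMPLE_FLAG:-0}" -eq 1 ] && echo enabled  # quietly evaluates to false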
00:03:27.877 07:46:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:27.877 07:46:21 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:27.877 07:46:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:27.877 07:46:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:27.877 07:46:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:27.877 07:46:21 -- spdk/autotest.sh@48 -- # udevadm_pid=2239623 00:03:27.877 07:46:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:27.877 07:46:21 -- pm/common@17 -- # local monitor 00:03:27.877 07:46:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.877 07:46:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:27.877 07:46:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.877 07:46:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.877 07:46:21 -- pm/common@21 -- # date +%s 00:03:27.877 07:46:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.877 07:46:21 -- pm/common@21 -- # date +%s 00:03:27.877 07:46:21 -- pm/common@25 -- # sleep 1 00:03:27.877 07:46:21 -- pm/common@21 -- # date +%s 00:03:27.877 07:46:21 -- pm/common@21 -- # date +%s 00:03:27.877 07:46:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732689981 00:03:27.877 07:46:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732689981 00:03:27.877 07:46:21 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732689981 00:03:27.877 07:46:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732689981 00:03:27.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732689981_collect-vmstat.pm.log 00:03:27.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732689981_collect-cpu-load.pm.log 00:03:27.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732689981_collect-cpu-temp.pm.log 00:03:27.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732689981_collect-bmc-pm.bmc.pm.log 00:03:28.815 07:46:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:28.815 07:46:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:28.815 07:46:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:28.815 07:46:22 -- common/autotest_common.sh@10 -- # set +x 00:03:28.815 07:46:22 -- spdk/autotest.sh@59 -- # create_test_list 00:03:28.815 07:46:22 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:28.815 07:46:22 -- common/autotest_common.sh@10 -- # set +x 00:03:28.815 07:46:22 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:28.815 07:46:22 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:28.815 07:46:22 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:28.815 07:46:22 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:28.815 07:46:22 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:28.815 07:46:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:28.815 07:46:22 -- common/autotest_common.sh@1457 -- # uname 00:03:28.815 07:46:22 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:28.815 07:46:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:28.815 07:46:22 -- common/autotest_common.sh@1477 -- # uname 00:03:28.815 07:46:22 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:28.815 07:46:22 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:28.815 07:46:22 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:28.815 lcov: LCOV version 1.15 00:03:28.815 07:46:22 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:41.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:41.023 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:53.229 07:46:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:53.229 07:46:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.229 07:46:45 -- common/autotest_common.sh@10 -- # set +x 00:03:53.229 07:46:45 -- spdk/autotest.sh@78 -- # rm -f 00:03:53.229 07:46:45 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.608 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:54.608 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:54.608 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:54.867 07:46:48 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:54.867 07:46:48 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:54.867 07:46:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:54.867 07:46:48 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:54.867 07:46:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:54.867 07:46:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:54.867 07:46:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:54.867 07:46:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.867 07:46:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:54.867 07:46:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:54.867 07:46:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.867 07:46:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.867 07:46:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:54.867 07:46:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:54.867 07:46:48 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:54.867 No valid GPT data, bailing 00:03:54.867 07:46:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:54.867 07:46:48 -- scripts/common.sh@394 -- # pt= 00:03:54.867 07:46:48 -- scripts/common.sh@395 -- # return 1 00:03:54.867 07:46:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:54.867 1+0 records in 00:03:54.867 1+0 records out 00:03:54.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00152836 s, 686 MB/s 00:03:54.867 07:46:48 -- spdk/autotest.sh@105 -- # sync 00:03:54.867 07:46:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:54.867 07:46:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:54.867 07:46:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:59.059 07:46:53 -- spdk/autotest.sh@111 -- # uname -s 00:03:59.059 07:46:53 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:59.059 07:46:53 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:59.059 07:46:53 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:01.701 Hugepages 00:04:01.701 node hugesize free / total 00:04:01.701 node0 1048576kB 0 / 0 00:04:01.701 node0 2048kB 0 / 0 00:04:01.701 node1 1048576kB 0 / 0 00:04:01.701 node1 2048kB 0 / 0 00:04:01.701 00:04:01.701 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.701 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:01.701 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:01.701 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:01.701 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:01.701 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:01.701 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:01.701 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:01.701 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:01.701 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:01.701 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:01.701 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:01.701 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:01.701 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:01.701 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:01.701 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:01.701 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:01.701 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:04:01.701 07:46:55 -- spdk/autotest.sh@117 -- # uname -s 00:04:01.701 07:46:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:01.701 07:46:55 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:01.701 07:46:55 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.238 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.238 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:04.807 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:04.807 07:46:58 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:06.185 07:46:59 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:06.185 07:46:59 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:06.185 07:46:59 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.185 07:46:59 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:06.185 07:46:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.185 07:46:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.185 07:46:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.185 07:46:59 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:06.185 07:46:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.185 07:46:59 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:06.185 07:46:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:06.185 07:46:59 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.718 Waiting for block devices as requested 00:04:08.718 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:08.718 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:08.718 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:08.718 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:08.977 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:08.977 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:08.977 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:08.977 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:09.236 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:09.236 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:09.236 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:09.236 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:09.495 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:09.495 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:09.495 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:09.495 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:09.755 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:09.755 07:47:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:09.755 07:47:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:09.755 07:47:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:09.755 07:47:03 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:09.755 07:47:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:09.755 07:47:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:09.755 07:47:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:09.755 07:47:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:09.755 07:47:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:09.755 07:47:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:09.755 07:47:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:09.755 07:47:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:09.755 07:47:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:09.755 07:47:03 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:09.755 07:47:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:09.755 07:47:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:09.755 07:47:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:09.755 07:47:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:09.755 07:47:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:09.755 07:47:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:09.755 07:47:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:09.755 07:47:03 -- common/autotest_common.sh@1543 -- # continue 00:04:09.755 07:47:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:09.755 07:47:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.755 07:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.755 07:47:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:09.755 07:47:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.755 07:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.755 07:47:03 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.045 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.045 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.613 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:13.613 07:47:07 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:13.613 07:47:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.613 07:47:07 -- common/autotest_common.sh@10 -- # set +x 00:04:13.613 07:47:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:13.613 07:47:07 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:13.613 07:47:07 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:13.613 07:47:07 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:13.613 07:47:07 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:13.613 07:47:07 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:13.613 07:47:07 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:13.613 07:47:07 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:13.613 07:47:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:13.613 07:47:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:13.613 07:47:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.613 07:47:07 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:13.613 07:47:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:13.872 07:47:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:13.872 07:47:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:13.872 07:47:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:13.872 07:47:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:13.872 07:47:07 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:13.872 07:47:07 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:13.872 07:47:07 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:13.872 07:47:07 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:13.872 07:47:07 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:13.872 07:47:07 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:13.872 07:47:07 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2253115 00:04:13.872 07:47:07 -- common/autotest_common.sh@1585 -- # waitforlisten 2253115 00:04:13.872 07:47:07 -- common/autotest_common.sh@835 -- # '[' -z 2253115 ']' 00:04:13.872 07:47:07 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.872 07:47:07 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.872 07:47:07 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.872 07:47:07 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.872 07:47:07 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.872 07:47:07 -- common/autotest_common.sh@10 -- # set +x 00:04:13.872 [2024-11-27 07:47:07.850807] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
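For reference, the NVMe discovery and attach steps traced in this part of the log can be replayed by hand with the sketch below. It assumes the same workspace layout shown throughout the log and an spdk_tgt already listening on the default /var/tmp/spdk.sock (as started above); it mirrors the gen_nvme.sh | jq pipeline used by get_nvme_bdfs in the trace and the rpc.py bdev_nvme_attach_controller call that appears just below.

  # Sketch, assuming the workspace path shown in this log and a running spdk_tgt.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Enumerate NVMe PCI addresses the same way get_nvme_bdfs does in the trace:
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"    # this run reports a single device, 0000:5e:00.0

  # Attach the first controller as bdev "nvme0", mirroring the rpc.py call below:
  "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a "${bdfs[0]}"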
00:04:13.872 [2024-11-27 07:47:07.850855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253115 ] 00:04:13.872 [2024-11-27 07:47:07.913133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.872 [2024-11-27 07:47:07.955495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.131 07:47:08 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.131 07:47:08 -- common/autotest_common.sh@868 -- # return 0 00:04:14.131 07:47:08 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:14.131 07:47:08 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:14.131 07:47:08 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:17.421 nvme0n1 00:04:17.421 07:47:11 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:17.421 [2024-11-27 07:47:11.362461] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:17.421 request: 00:04:17.421 { 00:04:17.421 "nvme_ctrlr_name": "nvme0", 00:04:17.421 "password": "test", 00:04:17.421 "method": "bdev_nvme_opal_revert", 00:04:17.421 "req_id": 1 00:04:17.421 } 00:04:17.421 Got JSON-RPC error response 00:04:17.421 response: 00:04:17.421 { 00:04:17.421 "code": -32602, 00:04:17.421 "message": "Invalid parameters" 00:04:17.421 } 00:04:17.421 07:47:11 -- common/autotest_common.sh@1591 -- # true 00:04:17.421 07:47:11 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:17.421 07:47:11 -- common/autotest_common.sh@1595 -- # killprocess 2253115 00:04:17.421 07:47:11 -- common/autotest_common.sh@954 -- # '[' -z 2253115 ']' 00:04:17.421 07:47:11 -- common/autotest_common.sh@958 -- # kill -0 2253115 00:04:17.421 07:47:11 -- common/autotest_common.sh@959 -- # uname 00:04:17.421 07:47:11 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.421 07:47:11 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2253115 00:04:17.421 07:47:11 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.421 07:47:11 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.421 07:47:11 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2253115' 00:04:17.421 killing process with pid 2253115 00:04:17.421 07:47:11 -- common/autotest_common.sh@973 -- # kill 2253115 00:04:17.421 07:47:11 -- common/autotest_common.sh@978 -- # wait 2253115 00:04:19.330 07:47:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:19.330 07:47:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:19.330 07:47:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.330 07:47:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.330 07:47:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:19.330 07:47:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.330 07:47:12 -- common/autotest_common.sh@10 -- # set +x 00:04:19.330 07:47:13 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:19.330 07:47:13 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:19.330 07:47:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.330 07:47:13 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:04:19.330 07:47:13 -- common/autotest_common.sh@10 -- # set +x 00:04:19.330 ************************************ 00:04:19.330 START TEST env 00:04:19.330 ************************************ 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:19.330 * Looking for test storage... 00:04:19.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.330 07:47:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.330 07:47:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.330 07:47:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.330 07:47:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.330 07:47:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.330 07:47:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.330 07:47:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.330 07:47:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.330 07:47:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.330 07:47:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.330 07:47:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.330 07:47:13 env -- scripts/common.sh@344 -- # case "$op" in 00:04:19.330 07:47:13 env -- scripts/common.sh@345 -- # : 1 00:04:19.330 07:47:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.330 07:47:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.330 07:47:13 env -- scripts/common.sh@365 -- # decimal 1 00:04:19.330 07:47:13 env -- scripts/common.sh@353 -- # local d=1 00:04:19.330 07:47:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.330 07:47:13 env -- scripts/common.sh@355 -- # echo 1 00:04:19.330 07:47:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.330 07:47:13 env -- scripts/common.sh@366 -- # decimal 2 00:04:19.330 07:47:13 env -- scripts/common.sh@353 -- # local d=2 00:04:19.330 07:47:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.330 07:47:13 env -- scripts/common.sh@355 -- # echo 2 00:04:19.330 07:47:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.330 07:47:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.330 07:47:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.330 07:47:13 env -- scripts/common.sh@368 -- # return 0 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.330 --rc genhtml_branch_coverage=1 00:04:19.330 --rc genhtml_function_coverage=1 00:04:19.330 --rc genhtml_legend=1 00:04:19.330 --rc geninfo_all_blocks=1 00:04:19.330 --rc geninfo_unexecuted_blocks=1 00:04:19.330 00:04:19.330 ' 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.330 --rc genhtml_branch_coverage=1 00:04:19.330 --rc genhtml_function_coverage=1 00:04:19.330 --rc genhtml_legend=1 00:04:19.330 --rc geninfo_all_blocks=1 00:04:19.330 --rc geninfo_unexecuted_blocks=1 00:04:19.330 00:04:19.330 ' 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.330 --rc genhtml_branch_coverage=1 00:04:19.330 --rc genhtml_function_coverage=1 00:04:19.330 --rc genhtml_legend=1 00:04:19.330 --rc geninfo_all_blocks=1 00:04:19.330 --rc geninfo_unexecuted_blocks=1 00:04:19.330 00:04:19.330 ' 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.330 --rc genhtml_branch_coverage=1 00:04:19.330 --rc genhtml_function_coverage=1 00:04:19.330 --rc genhtml_legend=1 00:04:19.330 --rc geninfo_all_blocks=1 00:04:19.330 --rc geninfo_unexecuted_blocks=1 00:04:19.330 00:04:19.330 ' 00:04:19.330 07:47:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.330 07:47:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.330 ************************************ 00:04:19.330 START TEST env_memory 00:04:19.330 ************************************ 00:04:19.330 07:47:13 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:19.330 00:04:19.330 00:04:19.330 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.330 http://cunit.sourceforge.net/ 00:04:19.330 00:04:19.330 00:04:19.330 Suite: memory 00:04:19.330 Test: alloc and free memory map ...[2024-11-27 07:47:13.281560] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:19.330 passed 00:04:19.330 Test: mem map translation ...[2024-11-27 07:47:13.302321] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:19.330 [2024-11-27 07:47:13.302336] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:19.330 [2024-11-27 07:47:13.302371] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:19.330 [2024-11-27 07:47:13.302378] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:19.330 passed 00:04:19.330 Test: mem map registration ...[2024-11-27 07:47:13.342453] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:19.330 [2024-11-27 07:47:13.342468] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:19.330 passed 00:04:19.330 Test: mem map adjacent registrations ...passed 00:04:19.330 00:04:19.330 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.330 suites 1 1 n/a 0 0 00:04:19.330 tests 4 4 4 0 0 00:04:19.330 asserts 152 152 152 0 n/a 00:04:19.330 00:04:19.330 Elapsed time = 0.146 seconds 00:04:19.330 00:04:19.330 real 0m0.159s 00:04:19.330 user 0m0.150s 00:04:19.330 sys 0m0.008s 00:04:19.330 07:47:13 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.330 07:47:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:19.330 ************************************ 00:04:19.330 END TEST env_memory 00:04:19.330 ************************************ 00:04:19.330 07:47:13 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.330 07:47:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.330 07:47:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.590 ************************************ 00:04:19.590 START TEST env_vtophys 00:04:19.590 ************************************ 00:04:19.590 07:47:13 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:19.590 EAL: lib.eal log level changed from notice to debug 00:04:19.590 EAL: Detected lcore 0 as core 0 on socket 0 00:04:19.590 EAL: Detected lcore 1 as core 1 on socket 0 00:04:19.590 EAL: Detected lcore 2 as core 2 on socket 0 00:04:19.590 EAL: Detected lcore 3 as core 3 on socket 0 00:04:19.591 EAL: Detected lcore 4 as core 4 on socket 0 00:04:19.591 EAL: Detected lcore 5 as core 5 on socket 0 00:04:19.591 EAL: Detected lcore 6 as core 6 on socket 0 00:04:19.591 EAL: Detected lcore 7 as core 8 on socket 0 00:04:19.591 EAL: Detected lcore 8 as core 9 on socket 0 00:04:19.591 EAL: Detected lcore 9 as core 10 on socket 0 00:04:19.591 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:19.591 EAL: Detected lcore 11 as core 12 on socket 0 00:04:19.591 EAL: Detected lcore 12 as core 13 on socket 0 00:04:19.591 EAL: Detected lcore 13 as core 16 on socket 0 00:04:19.591 EAL: Detected lcore 14 as core 17 on socket 0 00:04:19.591 EAL: Detected lcore 15 as core 18 on socket 0 00:04:19.591 EAL: Detected lcore 16 as core 19 on socket 0 00:04:19.591 EAL: Detected lcore 17 as core 20 on socket 0 00:04:19.591 EAL: Detected lcore 18 as core 21 on socket 0 00:04:19.591 EAL: Detected lcore 19 as core 25 on socket 0 00:04:19.591 EAL: Detected lcore 20 as core 26 on socket 0 00:04:19.591 EAL: Detected lcore 21 as core 27 on socket 0 00:04:19.591 EAL: Detected lcore 22 as core 28 on socket 0 00:04:19.591 EAL: Detected lcore 23 as core 29 on socket 0 00:04:19.591 EAL: Detected lcore 24 as core 0 on socket 1 00:04:19.591 EAL: Detected lcore 25 as core 1 on socket 1 00:04:19.591 EAL: Detected lcore 26 as core 2 on socket 1 00:04:19.591 EAL: Detected lcore 27 as core 3 on socket 1 00:04:19.591 EAL: Detected lcore 28 as core 4 on socket 1 00:04:19.591 EAL: Detected lcore 29 as core 5 on socket 1 00:04:19.591 EAL: Detected lcore 30 as core 6 on socket 1 00:04:19.591 EAL: Detected lcore 31 as core 9 on socket 1 00:04:19.591 EAL: Detected lcore 32 as core 10 on socket 1 00:04:19.591 EAL: Detected lcore 33 as core 11 on socket 1 00:04:19.591 EAL: Detected lcore 34 as core 12 on socket 1 00:04:19.591 EAL: Detected lcore 35 as core 13 on socket 1 00:04:19.591 EAL: Detected lcore 36 as core 16 on socket 1 00:04:19.591 EAL: Detected lcore 37 as core 17 on socket 1 00:04:19.591 EAL: Detected lcore 38 as core 18 on socket 1 00:04:19.591 EAL: Detected lcore 39 as core 19 on socket 1 00:04:19.591 EAL: Detected lcore 40 as core 20 on socket 1 00:04:19.591 EAL: Detected lcore 41 as core 21 on socket 1 00:04:19.591 EAL: Detected lcore 42 as core 24 on socket 1 00:04:19.591 EAL: Detected lcore 43 as core 25 on socket 1 00:04:19.591 EAL: Detected lcore 44 as core 26 on socket 1 00:04:19.591 EAL: Detected lcore 45 as core 27 on socket 1 00:04:19.591 EAL: Detected lcore 46 as core 28 on socket 1 00:04:19.591 EAL: Detected lcore 47 as core 29 on socket 1 00:04:19.591 EAL: Detected lcore 48 as core 0 on socket 0 00:04:19.591 EAL: Detected lcore 49 as core 1 on socket 0 00:04:19.591 EAL: Detected lcore 50 as core 2 on socket 0 00:04:19.591 EAL: Detected lcore 51 as core 3 on socket 0 00:04:19.591 EAL: Detected lcore 52 as core 4 on socket 0 00:04:19.591 EAL: Detected lcore 53 as core 5 on socket 0 00:04:19.591 EAL: Detected lcore 54 as core 6 on socket 0 00:04:19.591 EAL: Detected lcore 55 as core 8 on socket 0 00:04:19.591 EAL: Detected lcore 56 as core 9 on socket 0 00:04:19.591 EAL: Detected lcore 57 as core 10 on socket 0 00:04:19.591 EAL: Detected lcore 58 as core 11 on socket 0 00:04:19.591 EAL: Detected lcore 59 as core 12 on socket 0 00:04:19.591 EAL: Detected lcore 60 as core 13 on socket 0 00:04:19.591 EAL: Detected lcore 61 as core 16 on socket 0 00:04:19.591 EAL: Detected lcore 62 as core 17 on socket 0 00:04:19.591 EAL: Detected lcore 63 as core 18 on socket 0 00:04:19.591 EAL: Detected lcore 64 as core 19 on socket 0 00:04:19.591 EAL: Detected lcore 65 as core 20 on socket 0 00:04:19.591 EAL: Detected lcore 66 as core 21 on socket 0 00:04:19.591 EAL: Detected lcore 67 as core 25 on socket 0 00:04:19.591 EAL: Detected lcore 68 as core 26 on socket 0 00:04:19.591 EAL: Detected lcore 69 as core 27 on socket 0 00:04:19.591 EAL: Detected lcore 70 as core 28 on socket 0 
00:04:19.591 EAL: Detected lcore 71 as core 29 on socket 0 00:04:19.591 EAL: Detected lcore 72 as core 0 on socket 1 00:04:19.591 EAL: Detected lcore 73 as core 1 on socket 1 00:04:19.591 EAL: Detected lcore 74 as core 2 on socket 1 00:04:19.591 EAL: Detected lcore 75 as core 3 on socket 1 00:04:19.591 EAL: Detected lcore 76 as core 4 on socket 1 00:04:19.591 EAL: Detected lcore 77 as core 5 on socket 1 00:04:19.591 EAL: Detected lcore 78 as core 6 on socket 1 00:04:19.591 EAL: Detected lcore 79 as core 9 on socket 1 00:04:19.591 EAL: Detected lcore 80 as core 10 on socket 1 00:04:19.591 EAL: Detected lcore 81 as core 11 on socket 1 00:04:19.591 EAL: Detected lcore 82 as core 12 on socket 1 00:04:19.591 EAL: Detected lcore 83 as core 13 on socket 1 00:04:19.591 EAL: Detected lcore 84 as core 16 on socket 1 00:04:19.591 EAL: Detected lcore 85 as core 17 on socket 1 00:04:19.591 EAL: Detected lcore 86 as core 18 on socket 1 00:04:19.591 EAL: Detected lcore 87 as core 19 on socket 1 00:04:19.591 EAL: Detected lcore 88 as core 20 on socket 1 00:04:19.591 EAL: Detected lcore 89 as core 21 on socket 1 00:04:19.591 EAL: Detected lcore 90 as core 24 on socket 1 00:04:19.591 EAL: Detected lcore 91 as core 25 on socket 1 00:04:19.591 EAL: Detected lcore 92 as core 26 on socket 1 00:04:19.591 EAL: Detected lcore 93 as core 27 on socket 1 00:04:19.591 EAL: Detected lcore 94 as core 28 on socket 1 00:04:19.591 EAL: Detected lcore 95 as core 29 on socket 1 00:04:19.591 EAL: Maximum logical cores by configuration: 128 00:04:19.591 EAL: Detected CPU lcores: 96 00:04:19.591 EAL: Detected NUMA nodes: 2 00:04:19.591 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:19.591 EAL: Detected shared linkage of DPDK 00:04:19.591 EAL: No shared files mode enabled, IPC will be disabled 00:04:19.591 EAL: Bus pci wants IOVA as 'DC' 00:04:19.591 EAL: Buses did not request a specific IOVA mode. 00:04:19.591 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:19.591 EAL: Selected IOVA mode 'VA' 00:04:19.591 EAL: Probing VFIO support... 00:04:19.591 EAL: IOMMU type 1 (Type 1) is supported 00:04:19.591 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:19.591 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:19.591 EAL: VFIO support initialized 00:04:19.591 EAL: Ask a virtual area of 0x2e000 bytes 00:04:19.591 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:19.591 EAL: Setting up physically contiguous memory... 
00:04:19.591 EAL: Setting maximum number of open files to 524288 00:04:19.591 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:19.591 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:19.591 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:19.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.591 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:19.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.591 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:19.591 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:19.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.591 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:19.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.591 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:19.591 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:19.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.591 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:19.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.591 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:19.591 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:19.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.591 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:19.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.591 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:19.591 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:19.591 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:19.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.591 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:19.591 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:19.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.591 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:19.591 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:19.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.591 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:19.591 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:19.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.591 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:19.591 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:19.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.591 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:19.591 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:19.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.591 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:19.591 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:19.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.591 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:19.591 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:19.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.591 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:19.591 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:19.591 EAL: Hugepages will be freed exactly as allocated. 00:04:19.591 EAL: No shared files mode enabled, IPC is disabled 00:04:19.591 EAL: No shared files mode enabled, IPC is disabled 00:04:19.591 EAL: TSC frequency is ~2300000 KHz 00:04:19.591 EAL: Main lcore 0 is ready (tid=7f09d24f3a00;cpuset=[0]) 00:04:19.591 EAL: Trying to obtain current memory policy. 00:04:19.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.591 EAL: Restoring previous memory policy: 0 00:04:19.591 EAL: request: mp_malloc_sync 00:04:19.591 EAL: No shared files mode enabled, IPC is disabled 00:04:19.591 EAL: Heap on socket 0 was expanded by 2MB 00:04:19.591 EAL: No shared files mode enabled, IPC is disabled 00:04:19.591 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:19.591 EAL: Mem event callback 'spdk:(nil)' registered 00:04:19.591 00:04:19.591 00:04:19.591 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.591 http://cunit.sourceforge.net/ 00:04:19.591 00:04:19.592 00:04:19.592 Suite: components_suite 00:04:19.592 Test: vtophys_malloc_test ...passed 00:04:19.592 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:19.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.592 EAL: Restoring previous memory policy: 4 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was expanded by 4MB 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was shrunk by 4MB 00:04:19.592 EAL: Trying to obtain current memory policy. 00:04:19.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.592 EAL: Restoring previous memory policy: 4 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was expanded by 6MB 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was shrunk by 6MB 00:04:19.592 EAL: Trying to obtain current memory policy. 00:04:19.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.592 EAL: Restoring previous memory policy: 4 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was expanded by 10MB 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was shrunk by 10MB 00:04:19.592 EAL: Trying to obtain current memory policy. 
00:04:19.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.592 EAL: Restoring previous memory policy: 4 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was expanded by 18MB 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was shrunk by 18MB 00:04:19.592 EAL: Trying to obtain current memory policy. 00:04:19.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.592 EAL: Restoring previous memory policy: 4 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was expanded by 34MB 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was shrunk by 34MB 00:04:19.592 EAL: Trying to obtain current memory policy. 00:04:19.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.592 EAL: Restoring previous memory policy: 4 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was expanded by 66MB 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was shrunk by 66MB 00:04:19.592 EAL: Trying to obtain current memory policy. 00:04:19.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.592 EAL: Restoring previous memory policy: 4 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was expanded by 130MB 00:04:19.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.592 EAL: request: mp_malloc_sync 00:04:19.592 EAL: No shared files mode enabled, IPC is disabled 00:04:19.592 EAL: Heap on socket 0 was shrunk by 130MB 00:04:19.592 EAL: Trying to obtain current memory policy. 00:04:19.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.851 EAL: Restoring previous memory policy: 4 00:04:19.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.851 EAL: request: mp_malloc_sync 00:04:19.851 EAL: No shared files mode enabled, IPC is disabled 00:04:19.851 EAL: Heap on socket 0 was expanded by 258MB 00:04:19.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.851 EAL: request: mp_malloc_sync 00:04:19.851 EAL: No shared files mode enabled, IPC is disabled 00:04:19.851 EAL: Heap on socket 0 was shrunk by 258MB 00:04:19.851 EAL: Trying to obtain current memory policy. 
00:04:19.851 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.851 EAL: Restoring previous memory policy: 4 00:04:19.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.851 EAL: request: mp_malloc_sync 00:04:19.851 EAL: No shared files mode enabled, IPC is disabled 00:04:19.851 EAL: Heap on socket 0 was expanded by 514MB 00:04:19.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.110 EAL: request: mp_malloc_sync 00:04:20.110 EAL: No shared files mode enabled, IPC is disabled 00:04:20.110 EAL: Heap on socket 0 was shrunk by 514MB 00:04:20.110 EAL: Trying to obtain current memory policy. 00:04:20.110 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.370 EAL: Restoring previous memory policy: 4 00:04:20.370 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.370 EAL: request: mp_malloc_sync 00:04:20.370 EAL: No shared files mode enabled, IPC is disabled 00:04:20.370 EAL: Heap on socket 0 was expanded by 1026MB 00:04:20.370 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.629 EAL: request: mp_malloc_sync 00:04:20.629 EAL: No shared files mode enabled, IPC is disabled 00:04:20.629 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:20.629 passed 00:04:20.629 00:04:20.629 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.629 suites 1 1 n/a 0 0 00:04:20.629 tests 2 2 2 0 0 00:04:20.629 asserts 497 497 497 0 n/a 00:04:20.629 00:04:20.629 Elapsed time = 0.965 seconds 00:04:20.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.629 EAL: request: mp_malloc_sync 00:04:20.629 EAL: No shared files mode enabled, IPC is disabled 00:04:20.629 EAL: Heap on socket 0 was shrunk by 2MB 00:04:20.629 EAL: No shared files mode enabled, IPC is disabled 00:04:20.629 EAL: No shared files mode enabled, IPC is disabled 00:04:20.629 EAL: No shared files mode enabled, IPC is disabled 00:04:20.629 00:04:20.629 real 0m1.076s 00:04:20.629 user 0m0.636s 00:04:20.629 sys 0m0.419s 00:04:20.629 07:47:14 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.629 07:47:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:20.629 ************************************ 00:04:20.629 END TEST env_vtophys 00:04:20.629 ************************************ 00:04:20.629 07:47:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:20.629 07:47:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.629 07:47:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.629 07:47:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.629 ************************************ 00:04:20.629 START TEST env_pci 00:04:20.629 ************************************ 00:04:20.629 07:47:14 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:20.629 00:04:20.629 00:04:20.629 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.629 http://cunit.sourceforge.net/ 00:04:20.629 00:04:20.629 00:04:20.629 Suite: pci 00:04:20.629 Test: pci_hook ...[2024-11-27 07:47:14.620557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2254340 has claimed it 00:04:20.629 EAL: Cannot find device (10000:00:01.0) 00:04:20.629 EAL: Failed to attach device on primary process 00:04:20.629 passed 00:04:20.629 00:04:20.629 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:20.629 suites 1 1 n/a 0 0 00:04:20.630 tests 1 1 1 0 0 00:04:20.630 asserts 25 25 25 0 n/a 00:04:20.630 00:04:20.630 Elapsed time = 0.027 seconds 00:04:20.630 00:04:20.630 real 0m0.042s 00:04:20.630 user 0m0.014s 00:04:20.630 sys 0m0.028s 00:04:20.630 07:47:14 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.630 07:47:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:20.630 ************************************ 00:04:20.630 END TEST env_pci 00:04:20.630 ************************************ 00:04:20.630 07:47:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:20.630 07:47:14 env -- env/env.sh@15 -- # uname 00:04:20.630 07:47:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:20.630 07:47:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:20.630 07:47:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.630 07:47:14 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:20.630 07:47:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.630 07:47:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.630 ************************************ 00:04:20.630 START TEST env_dpdk_post_init 00:04:20.630 ************************************ 00:04:20.630 07:47:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.890 EAL: Detected CPU lcores: 96 00:04:20.890 EAL: Detected NUMA nodes: 2 00:04:20.890 EAL: Detected shared linkage of DPDK 00:04:20.890 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.890 EAL: Selected IOVA mode 'VA' 00:04:20.890 EAL: VFIO support initialized 00:04:20.890 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.890 EAL: Using IOMMU type 1 (Type 1) 00:04:20.890 EAL: Ignore mapping IO port bar(1) 00:04:20.890 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:20.890 EAL: Ignore mapping IO port bar(1) 00:04:20.890 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:20.890 EAL: Ignore mapping IO port bar(1) 00:04:20.890 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:20.890 EAL: Ignore mapping IO port bar(1) 00:04:20.890 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:20.890 EAL: Ignore mapping IO port bar(1) 00:04:20.890 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:20.890 EAL: Ignore mapping IO port bar(1) 00:04:20.890 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:20.890 EAL: Ignore mapping IO port bar(1) 00:04:20.890 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:20.890 EAL: Ignore mapping IO port bar(1) 00:04:20.890 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:21.829 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:21.829 EAL: Ignore mapping IO port bar(1) 00:04:21.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:21.829 EAL: Ignore mapping IO port bar(1) 00:04:21.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:21.829 EAL: Ignore mapping IO port bar(1) 00:04:21.829 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:21.829 EAL: Ignore mapping IO port bar(1) 00:04:21.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:21.829 EAL: Ignore mapping IO port bar(1) 00:04:21.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:21.829 EAL: Ignore mapping IO port bar(1) 00:04:21.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:21.829 EAL: Ignore mapping IO port bar(1) 00:04:21.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:21.829 EAL: Ignore mapping IO port bar(1) 00:04:21.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:25.116 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:25.116 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:25.116 Starting DPDK initialization... 00:04:25.116 Starting SPDK post initialization... 00:04:25.116 SPDK NVMe probe 00:04:25.116 Attaching to 0000:5e:00.0 00:04:25.116 Attached to 0000:5e:00.0 00:04:25.116 Cleaning up... 00:04:25.116 00:04:25.116 real 0m4.392s 00:04:25.116 user 0m3.006s 00:04:25.116 sys 0m0.455s 00:04:25.116 07:47:19 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.116 07:47:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.116 ************************************ 00:04:25.116 END TEST env_dpdk_post_init 00:04:25.116 ************************************ 00:04:25.116 07:47:19 env -- env/env.sh@26 -- # uname 00:04:25.116 07:47:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:25.116 07:47:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.116 07:47:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.116 07:47:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.116 07:47:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.116 ************************************ 00:04:25.116 START TEST env_mem_callbacks 00:04:25.116 ************************************ 00:04:25.116 07:47:19 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.116 EAL: Detected CPU lcores: 96 00:04:25.116 EAL: Detected NUMA nodes: 2 00:04:25.116 EAL: Detected shared linkage of DPDK 00:04:25.116 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.116 EAL: Selected IOVA mode 'VA' 00:04:25.116 EAL: VFIO support initialized 00:04:25.116 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.116 00:04:25.116 00:04:25.116 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.116 http://cunit.sourceforge.net/ 00:04:25.116 00:04:25.116 00:04:25.116 Suite: memory 00:04:25.116 Test: test ... 
00:04:25.116 register 0x200000200000 2097152 00:04:25.116 malloc 3145728 00:04:25.116 register 0x200000400000 4194304 00:04:25.116 buf 0x200000500000 len 3145728 PASSED 00:04:25.116 malloc 64 00:04:25.116 buf 0x2000004fff40 len 64 PASSED 00:04:25.116 malloc 4194304 00:04:25.116 register 0x200000800000 6291456 00:04:25.116 buf 0x200000a00000 len 4194304 PASSED 00:04:25.116 free 0x200000500000 3145728 00:04:25.116 free 0x2000004fff40 64 00:04:25.116 unregister 0x200000400000 4194304 PASSED 00:04:25.116 free 0x200000a00000 4194304 00:04:25.116 unregister 0x200000800000 6291456 PASSED 00:04:25.116 malloc 8388608 00:04:25.116 register 0x200000400000 10485760 00:04:25.116 buf 0x200000600000 len 8388608 PASSED 00:04:25.116 free 0x200000600000 8388608 00:04:25.116 unregister 0x200000400000 10485760 PASSED 00:04:25.116 passed 00:04:25.116 00:04:25.116 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.116 suites 1 1 n/a 0 0 00:04:25.116 tests 1 1 1 0 0 00:04:25.116 asserts 15 15 15 0 n/a 00:04:25.116 00:04:25.116 Elapsed time = 0.004 seconds 00:04:25.116 00:04:25.116 real 0m0.035s 00:04:25.116 user 0m0.010s 00:04:25.116 sys 0m0.026s 00:04:25.116 07:47:19 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.116 07:47:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:25.116 ************************************ 00:04:25.116 END TEST env_mem_callbacks 00:04:25.116 ************************************ 00:04:25.374 00:04:25.374 real 0m6.200s 00:04:25.374 user 0m4.062s 00:04:25.374 sys 0m1.220s 00:04:25.374 07:47:19 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.374 07:47:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.374 ************************************ 00:04:25.374 END TEST env 00:04:25.374 ************************************ 00:04:25.374 07:47:19 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:25.374 07:47:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.374 07:47:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.374 07:47:19 -- common/autotest_common.sh@10 -- # set +x 00:04:25.374 ************************************ 00:04:25.374 START TEST rpc 00:04:25.374 ************************************ 00:04:25.374 07:47:19 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:25.374 * Looking for test storage... 
00:04:25.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.374 07:47:19 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.374 07:47:19 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.374 07:47:19 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.374 07:47:19 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.374 07:47:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.374 07:47:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.374 07:47:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.374 07:47:19 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.374 07:47:19 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.374 07:47:19 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.374 07:47:19 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.374 07:47:19 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.374 07:47:19 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.374 07:47:19 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.374 07:47:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.374 07:47:19 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.374 07:47:19 rpc -- scripts/common.sh@345 -- # : 1 00:04:25.374 07:47:19 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.374 07:47:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.374 07:47:19 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.374 07:47:19 rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.374 07:47:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.374 07:47:19 rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.374 07:47:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.374 07:47:19 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.374 07:47:19 rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.374 07:47:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.374 07:47:19 rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.374 07:47:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.374 07:47:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.374 07:47:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.374 07:47:19 rpc -- scripts/common.sh@368 -- # return 0 00:04:25.374 07:47:19 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.374 07:47:19 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.374 --rc genhtml_branch_coverage=1 00:04:25.374 --rc genhtml_function_coverage=1 00:04:25.374 --rc genhtml_legend=1 00:04:25.374 --rc geninfo_all_blocks=1 00:04:25.374 --rc geninfo_unexecuted_blocks=1 00:04:25.374 00:04:25.374 ' 00:04:25.374 07:47:19 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.374 --rc genhtml_branch_coverage=1 00:04:25.374 --rc genhtml_function_coverage=1 00:04:25.374 --rc genhtml_legend=1 00:04:25.374 --rc geninfo_all_blocks=1 00:04:25.374 --rc geninfo_unexecuted_blocks=1 00:04:25.375 00:04:25.375 ' 00:04:25.375 07:47:19 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:25.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.375 --rc genhtml_branch_coverage=1 00:04:25.375 --rc genhtml_function_coverage=1 
00:04:25.375 --rc genhtml_legend=1 00:04:25.375 --rc geninfo_all_blocks=1 00:04:25.375 --rc geninfo_unexecuted_blocks=1 00:04:25.375 00:04:25.375 ' 00:04:25.375 07:47:19 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.375 --rc genhtml_branch_coverage=1 00:04:25.375 --rc genhtml_function_coverage=1 00:04:25.375 --rc genhtml_legend=1 00:04:25.375 --rc geninfo_all_blocks=1 00:04:25.375 --rc geninfo_unexecuted_blocks=1 00:04:25.375 00:04:25.375 ' 00:04:25.375 07:47:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2255257 00:04:25.375 07:47:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.375 07:47:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2255257 00:04:25.375 07:47:19 rpc -- common/autotest_common.sh@835 -- # '[' -z 2255257 ']' 00:04:25.375 07:47:19 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.375 07:47:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:25.375 07:47:19 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.375 07:47:19 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.375 07:47:19 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.375 07:47:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.634 [2024-11-27 07:47:19.527743] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:04:25.634 [2024-11-27 07:47:19.527789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255257 ] 00:04:25.634 [2024-11-27 07:47:19.590132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.634 [2024-11-27 07:47:19.632158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:25.634 [2024-11-27 07:47:19.632195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2255257' to capture a snapshot of events at runtime. 00:04:25.634 [2024-11-27 07:47:19.632204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:25.634 [2024-11-27 07:47:19.632209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:25.634 [2024-11-27 07:47:19.632214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2255257 for offline analysis/debug. 
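(Aside on the two trace NOTICE lines just above: spdk_tgt was started with '-e bdev', so only the bdev tracepoint group is being recorded, and the application prints the two ways to inspect it. A minimal bash sketch of both paths follows; the '-s'/'-p' invocation and the shm path are quoted verbatim by the application, while the spdk_trace binary location under the same build tree and the '-f' option for decoding a copied file are assumptions to verify against the build in use, not details confirmed by this log.

# Live snapshot while spdk_tgt (pid 2255257) is still running -- flags exactly as printed by the app:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s spdk_tgt -p 2255257
# Preserve the shared-memory trace file for offline analysis, as the second NOTICE suggests:
cp /dev/shm/spdk_tgt_trace.pid2255257 /tmp/
# Decode the copy later; the -f option is an assumption about spdk_trace, not confirmed here:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid2255257
)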
00:04:25.634 [2024-11-27 07:47:19.632755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.893 07:47:19 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.893 07:47:19 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:25.893 07:47:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.893 07:47:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.893 07:47:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:25.893 07:47:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:25.893 07:47:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.893 07:47:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.893 07:47:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.893 ************************************ 00:04:25.893 START TEST rpc_integrity 00:04:25.893 ************************************ 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.893 { 00:04:25.893 "name": "Malloc0", 00:04:25.893 "aliases": [ 00:04:25.893 "5cd9f994-4983-48c4-9837-688fee789f52" 00:04:25.893 ], 00:04:25.893 "product_name": "Malloc disk", 00:04:25.893 "block_size": 512, 00:04:25.893 "num_blocks": 16384, 00:04:25.893 "uuid": "5cd9f994-4983-48c4-9837-688fee789f52", 00:04:25.893 "assigned_rate_limits": { 00:04:25.893 "rw_ios_per_sec": 0, 00:04:25.893 "rw_mbytes_per_sec": 0, 00:04:25.893 "r_mbytes_per_sec": 0, 00:04:25.893 "w_mbytes_per_sec": 0 00:04:25.893 }, 
00:04:25.893 "claimed": false, 00:04:25.893 "zoned": false, 00:04:25.893 "supported_io_types": { 00:04:25.893 "read": true, 00:04:25.893 "write": true, 00:04:25.893 "unmap": true, 00:04:25.893 "flush": true, 00:04:25.893 "reset": true, 00:04:25.893 "nvme_admin": false, 00:04:25.893 "nvme_io": false, 00:04:25.893 "nvme_io_md": false, 00:04:25.893 "write_zeroes": true, 00:04:25.893 "zcopy": true, 00:04:25.893 "get_zone_info": false, 00:04:25.893 "zone_management": false, 00:04:25.893 "zone_append": false, 00:04:25.893 "compare": false, 00:04:25.893 "compare_and_write": false, 00:04:25.893 "abort": true, 00:04:25.893 "seek_hole": false, 00:04:25.893 "seek_data": false, 00:04:25.893 "copy": true, 00:04:25.893 "nvme_iov_md": false 00:04:25.893 }, 00:04:25.893 "memory_domains": [ 00:04:25.893 { 00:04:25.893 "dma_device_id": "system", 00:04:25.893 "dma_device_type": 1 00:04:25.893 }, 00:04:25.893 { 00:04:25.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.893 "dma_device_type": 2 00:04:25.893 } 00:04:25.893 ], 00:04:25.893 "driver_specific": {} 00:04:25.893 } 00:04:25.893 ]' 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.893 [2024-11-27 07:47:19.976580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:25.893 [2024-11-27 07:47:19.976610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.893 [2024-11-27 07:47:19.976621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x138d280 00:04:25.893 [2024-11-27 07:47:19.976628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.893 [2024-11-27 07:47:19.977730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.893 [2024-11-27 07:47:19.977751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.893 Passthru0 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.893 07:47:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.893 07:47:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.152 07:47:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.152 { 00:04:26.152 "name": "Malloc0", 00:04:26.152 "aliases": [ 00:04:26.152 "5cd9f994-4983-48c4-9837-688fee789f52" 00:04:26.152 ], 00:04:26.152 "product_name": "Malloc disk", 00:04:26.152 "block_size": 512, 00:04:26.152 "num_blocks": 16384, 00:04:26.152 "uuid": "5cd9f994-4983-48c4-9837-688fee789f52", 00:04:26.152 "assigned_rate_limits": { 00:04:26.152 "rw_ios_per_sec": 0, 00:04:26.152 "rw_mbytes_per_sec": 0, 00:04:26.152 "r_mbytes_per_sec": 0, 00:04:26.152 "w_mbytes_per_sec": 0 00:04:26.152 }, 00:04:26.152 "claimed": true, 00:04:26.152 "claim_type": "exclusive_write", 00:04:26.152 "zoned": false, 00:04:26.152 "supported_io_types": { 00:04:26.152 "read": true, 00:04:26.152 "write": true, 00:04:26.152 "unmap": true, 00:04:26.152 "flush": 
true, 00:04:26.152 "reset": true, 00:04:26.152 "nvme_admin": false, 00:04:26.152 "nvme_io": false, 00:04:26.152 "nvme_io_md": false, 00:04:26.152 "write_zeroes": true, 00:04:26.152 "zcopy": true, 00:04:26.152 "get_zone_info": false, 00:04:26.152 "zone_management": false, 00:04:26.152 "zone_append": false, 00:04:26.152 "compare": false, 00:04:26.152 "compare_and_write": false, 00:04:26.152 "abort": true, 00:04:26.152 "seek_hole": false, 00:04:26.152 "seek_data": false, 00:04:26.152 "copy": true, 00:04:26.152 "nvme_iov_md": false 00:04:26.152 }, 00:04:26.152 "memory_domains": [ 00:04:26.152 { 00:04:26.152 "dma_device_id": "system", 00:04:26.152 "dma_device_type": 1 00:04:26.152 }, 00:04:26.152 { 00:04:26.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.152 "dma_device_type": 2 00:04:26.152 } 00:04:26.152 ], 00:04:26.152 "driver_specific": {} 00:04:26.152 }, 00:04:26.152 { 00:04:26.152 "name": "Passthru0", 00:04:26.152 "aliases": [ 00:04:26.152 "2d9481ef-6197-5e2d-9bb8-e6236263092c" 00:04:26.152 ], 00:04:26.152 "product_name": "passthru", 00:04:26.152 "block_size": 512, 00:04:26.152 "num_blocks": 16384, 00:04:26.152 "uuid": "2d9481ef-6197-5e2d-9bb8-e6236263092c", 00:04:26.152 "assigned_rate_limits": { 00:04:26.152 "rw_ios_per_sec": 0, 00:04:26.152 "rw_mbytes_per_sec": 0, 00:04:26.152 "r_mbytes_per_sec": 0, 00:04:26.152 "w_mbytes_per_sec": 0 00:04:26.152 }, 00:04:26.152 "claimed": false, 00:04:26.152 "zoned": false, 00:04:26.152 "supported_io_types": { 00:04:26.152 "read": true, 00:04:26.152 "write": true, 00:04:26.152 "unmap": true, 00:04:26.152 "flush": true, 00:04:26.152 "reset": true, 00:04:26.152 "nvme_admin": false, 00:04:26.152 "nvme_io": false, 00:04:26.152 "nvme_io_md": false, 00:04:26.152 "write_zeroes": true, 00:04:26.152 "zcopy": true, 00:04:26.152 "get_zone_info": false, 00:04:26.152 "zone_management": false, 00:04:26.152 "zone_append": false, 00:04:26.152 "compare": false, 00:04:26.152 "compare_and_write": false, 00:04:26.152 "abort": true, 00:04:26.152 "seek_hole": false, 00:04:26.152 "seek_data": false, 00:04:26.152 "copy": true, 00:04:26.152 "nvme_iov_md": false 00:04:26.152 }, 00:04:26.152 "memory_domains": [ 00:04:26.152 { 00:04:26.152 "dma_device_id": "system", 00:04:26.152 "dma_device_type": 1 00:04:26.152 }, 00:04:26.152 { 00:04:26.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.152 "dma_device_type": 2 00:04:26.152 } 00:04:26.152 ], 00:04:26.152 "driver_specific": { 00:04:26.152 "passthru": { 00:04:26.152 "name": "Passthru0", 00:04:26.152 "base_bdev_name": "Malloc0" 00:04:26.152 } 00:04:26.152 } 00:04:26.152 } 00:04:26.152 ]' 00:04:26.152 07:47:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.152 07:47:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.152 07:47:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.152 07:47:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.152 07:47:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.152 07:47:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.152 07:47:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.152 07:47:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.152 00:04:26.152 real 0m0.253s 00:04:26.152 user 0m0.171s 00:04:26.152 sys 0m0.028s 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.152 07:47:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.152 ************************************ 00:04:26.152 END TEST rpc_integrity 00:04:26.152 ************************************ 00:04:26.152 07:47:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:26.152 07:47:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.152 07:47:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.152 07:47:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.152 ************************************ 00:04:26.152 START TEST rpc_plugins 00:04:26.152 ************************************ 00:04:26.152 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:26.152 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:26.152 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.152 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.152 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.152 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:26.152 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:26.152 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.152 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.152 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.152 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:26.152 { 00:04:26.152 "name": "Malloc1", 00:04:26.152 "aliases": [ 00:04:26.152 "b319eaed-ae8d-4aed-b480-0e0bce4a91d7" 00:04:26.152 ], 00:04:26.152 "product_name": "Malloc disk", 00:04:26.152 "block_size": 4096, 00:04:26.153 "num_blocks": 256, 00:04:26.153 "uuid": "b319eaed-ae8d-4aed-b480-0e0bce4a91d7", 00:04:26.153 "assigned_rate_limits": { 00:04:26.153 "rw_ios_per_sec": 0, 00:04:26.153 "rw_mbytes_per_sec": 0, 00:04:26.153 "r_mbytes_per_sec": 0, 00:04:26.153 "w_mbytes_per_sec": 0 00:04:26.153 }, 00:04:26.153 "claimed": false, 00:04:26.153 "zoned": false, 00:04:26.153 "supported_io_types": { 00:04:26.153 "read": true, 00:04:26.153 "write": true, 00:04:26.153 "unmap": true, 00:04:26.153 "flush": true, 00:04:26.153 "reset": true, 00:04:26.153 "nvme_admin": false, 00:04:26.153 "nvme_io": false, 00:04:26.153 "nvme_io_md": false, 00:04:26.153 "write_zeroes": true, 00:04:26.153 "zcopy": true, 00:04:26.153 "get_zone_info": false, 00:04:26.153 "zone_management": false, 00:04:26.153 "zone_append": false, 00:04:26.153 "compare": false, 00:04:26.153 "compare_and_write": false, 00:04:26.153 "abort": true, 00:04:26.153 "seek_hole": false, 00:04:26.153 "seek_data": false, 00:04:26.153 "copy": true, 00:04:26.153 "nvme_iov_md": false 
00:04:26.153 }, 00:04:26.153 "memory_domains": [ 00:04:26.153 { 00:04:26.153 "dma_device_id": "system", 00:04:26.153 "dma_device_type": 1 00:04:26.153 }, 00:04:26.153 { 00:04:26.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.153 "dma_device_type": 2 00:04:26.153 } 00:04:26.153 ], 00:04:26.153 "driver_specific": {} 00:04:26.153 } 00:04:26.153 ]' 00:04:26.153 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:26.153 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:26.153 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:26.153 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.153 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.410 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.410 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:26.410 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.410 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.410 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.410 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:26.410 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:26.410 07:47:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:26.410 00:04:26.410 real 0m0.141s 00:04:26.410 user 0m0.094s 00:04:26.410 sys 0m0.016s 00:04:26.410 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.410 07:47:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.410 ************************************ 00:04:26.410 END TEST rpc_plugins 00:04:26.410 ************************************ 00:04:26.410 07:47:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:26.410 07:47:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.410 07:47:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.410 07:47:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.410 ************************************ 00:04:26.410 START TEST rpc_trace_cmd_test 00:04:26.410 ************************************ 00:04:26.410 07:47:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:26.410 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:26.410 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:26.410 07:47:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.410 07:47:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.410 07:47:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.410 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:26.410 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2255257", 00:04:26.410 "tpoint_group_mask": "0x8", 00:04:26.410 "iscsi_conn": { 00:04:26.410 "mask": "0x2", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "scsi": { 00:04:26.410 "mask": "0x4", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "bdev": { 00:04:26.410 "mask": "0x8", 00:04:26.410 "tpoint_mask": "0xffffffffffffffff" 00:04:26.410 }, 00:04:26.410 "nvmf_rdma": { 00:04:26.410 "mask": "0x10", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "nvmf_tcp": { 00:04:26.410 "mask": "0x20", 00:04:26.410 
"tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "ftl": { 00:04:26.410 "mask": "0x40", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "blobfs": { 00:04:26.410 "mask": "0x80", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "dsa": { 00:04:26.410 "mask": "0x200", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "thread": { 00:04:26.410 "mask": "0x400", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "nvme_pcie": { 00:04:26.410 "mask": "0x800", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "iaa": { 00:04:26.410 "mask": "0x1000", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "nvme_tcp": { 00:04:26.410 "mask": "0x2000", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "bdev_nvme": { 00:04:26.410 "mask": "0x4000", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "sock": { 00:04:26.410 "mask": "0x8000", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "blob": { 00:04:26.410 "mask": "0x10000", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "bdev_raid": { 00:04:26.410 "mask": "0x20000", 00:04:26.410 "tpoint_mask": "0x0" 00:04:26.410 }, 00:04:26.410 "scheduler": { 00:04:26.411 "mask": "0x40000", 00:04:26.411 "tpoint_mask": "0x0" 00:04:26.411 } 00:04:26.411 }' 00:04:26.411 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:26.411 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:26.411 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:26.411 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:26.411 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:26.668 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:26.668 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:26.668 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:26.668 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:26.668 07:47:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:26.668 00:04:26.668 real 0m0.222s 00:04:26.668 user 0m0.188s 00:04:26.668 sys 0m0.028s 00:04:26.668 07:47:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.668 07:47:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.668 ************************************ 00:04:26.668 END TEST rpc_trace_cmd_test 00:04:26.668 ************************************ 00:04:26.668 07:47:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:26.668 07:47:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:26.668 07:47:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:26.668 07:47:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.668 07:47:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.668 07:47:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.668 ************************************ 00:04:26.668 START TEST rpc_daemon_integrity 00:04:26.668 ************************************ 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.668 07:47:20 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.668 { 00:04:26.668 "name": "Malloc2", 00:04:26.668 "aliases": [ 00:04:26.668 "6ea5445d-6981-4fd2-88aa-d9537646bf4d" 00:04:26.668 ], 00:04:26.668 "product_name": "Malloc disk", 00:04:26.668 "block_size": 512, 00:04:26.668 "num_blocks": 16384, 00:04:26.668 "uuid": "6ea5445d-6981-4fd2-88aa-d9537646bf4d", 00:04:26.668 "assigned_rate_limits": { 00:04:26.668 "rw_ios_per_sec": 0, 00:04:26.668 "rw_mbytes_per_sec": 0, 00:04:26.668 "r_mbytes_per_sec": 0, 00:04:26.668 "w_mbytes_per_sec": 0 00:04:26.668 }, 00:04:26.668 "claimed": false, 00:04:26.668 "zoned": false, 00:04:26.668 "supported_io_types": { 00:04:26.668 "read": true, 00:04:26.668 "write": true, 00:04:26.668 "unmap": true, 00:04:26.668 "flush": true, 00:04:26.668 "reset": true, 00:04:26.668 "nvme_admin": false, 00:04:26.668 "nvme_io": false, 00:04:26.668 "nvme_io_md": false, 00:04:26.668 "write_zeroes": true, 00:04:26.668 "zcopy": true, 00:04:26.668 "get_zone_info": false, 00:04:26.668 "zone_management": false, 00:04:26.668 "zone_append": false, 00:04:26.668 "compare": false, 00:04:26.668 "compare_and_write": false, 00:04:26.668 "abort": true, 00:04:26.668 "seek_hole": false, 00:04:26.668 "seek_data": false, 00:04:26.668 "copy": true, 00:04:26.668 "nvme_iov_md": false 00:04:26.668 }, 00:04:26.668 "memory_domains": [ 00:04:26.668 { 00:04:26.668 "dma_device_id": "system", 00:04:26.668 "dma_device_type": 1 00:04:26.668 }, 00:04:26.668 { 00:04:26.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.668 "dma_device_type": 2 00:04:26.668 } 00:04:26.668 ], 00:04:26.668 "driver_specific": {} 00:04:26.668 } 00:04:26.668 ]' 00:04:26.668 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.926 [2024-11-27 07:47:20.810851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:26.926 
[2024-11-27 07:47:20.810878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.926 [2024-11-27 07:47:20.810890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x138f150 00:04:26.926 [2024-11-27 07:47:20.810896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.926 [2024-11-27 07:47:20.811911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.926 [2024-11-27 07:47:20.811930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.926 Passthru0 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.926 { 00:04:26.926 "name": "Malloc2", 00:04:26.926 "aliases": [ 00:04:26.926 "6ea5445d-6981-4fd2-88aa-d9537646bf4d" 00:04:26.926 ], 00:04:26.926 "product_name": "Malloc disk", 00:04:26.926 "block_size": 512, 00:04:26.926 "num_blocks": 16384, 00:04:26.926 "uuid": "6ea5445d-6981-4fd2-88aa-d9537646bf4d", 00:04:26.926 "assigned_rate_limits": { 00:04:26.926 "rw_ios_per_sec": 0, 00:04:26.926 "rw_mbytes_per_sec": 0, 00:04:26.926 "r_mbytes_per_sec": 0, 00:04:26.926 "w_mbytes_per_sec": 0 00:04:26.926 }, 00:04:26.926 "claimed": true, 00:04:26.926 "claim_type": "exclusive_write", 00:04:26.926 "zoned": false, 00:04:26.926 "supported_io_types": { 00:04:26.926 "read": true, 00:04:26.926 "write": true, 00:04:26.926 "unmap": true, 00:04:26.926 "flush": true, 00:04:26.926 "reset": true, 00:04:26.926 "nvme_admin": false, 00:04:26.926 "nvme_io": false, 00:04:26.926 "nvme_io_md": false, 00:04:26.926 "write_zeroes": true, 00:04:26.926 "zcopy": true, 00:04:26.926 "get_zone_info": false, 00:04:26.926 "zone_management": false, 00:04:26.926 "zone_append": false, 00:04:26.926 "compare": false, 00:04:26.926 "compare_and_write": false, 00:04:26.926 "abort": true, 00:04:26.926 "seek_hole": false, 00:04:26.926 "seek_data": false, 00:04:26.926 "copy": true, 00:04:26.926 "nvme_iov_md": false 00:04:26.926 }, 00:04:26.926 "memory_domains": [ 00:04:26.926 { 00:04:26.926 "dma_device_id": "system", 00:04:26.926 "dma_device_type": 1 00:04:26.926 }, 00:04:26.926 { 00:04:26.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.926 "dma_device_type": 2 00:04:26.926 } 00:04:26.926 ], 00:04:26.926 "driver_specific": {} 00:04:26.926 }, 00:04:26.926 { 00:04:26.926 "name": "Passthru0", 00:04:26.926 "aliases": [ 00:04:26.926 "c626b830-fbde-52f5-ab49-d1127c894dc5" 00:04:26.926 ], 00:04:26.926 "product_name": "passthru", 00:04:26.926 "block_size": 512, 00:04:26.926 "num_blocks": 16384, 00:04:26.926 "uuid": "c626b830-fbde-52f5-ab49-d1127c894dc5", 00:04:26.926 "assigned_rate_limits": { 00:04:26.926 "rw_ios_per_sec": 0, 00:04:26.926 "rw_mbytes_per_sec": 0, 00:04:26.926 "r_mbytes_per_sec": 0, 00:04:26.926 "w_mbytes_per_sec": 0 00:04:26.926 }, 00:04:26.926 "claimed": false, 00:04:26.926 "zoned": false, 00:04:26.926 "supported_io_types": { 00:04:26.926 "read": true, 00:04:26.926 "write": true, 00:04:26.926 "unmap": true, 00:04:26.926 "flush": true, 00:04:26.926 "reset": true, 
00:04:26.926 "nvme_admin": false, 00:04:26.926 "nvme_io": false, 00:04:26.926 "nvme_io_md": false, 00:04:26.926 "write_zeroes": true, 00:04:26.926 "zcopy": true, 00:04:26.926 "get_zone_info": false, 00:04:26.926 "zone_management": false, 00:04:26.926 "zone_append": false, 00:04:26.926 "compare": false, 00:04:26.926 "compare_and_write": false, 00:04:26.926 "abort": true, 00:04:26.926 "seek_hole": false, 00:04:26.926 "seek_data": false, 00:04:26.926 "copy": true, 00:04:26.926 "nvme_iov_md": false 00:04:26.926 }, 00:04:26.926 "memory_domains": [ 00:04:26.926 { 00:04:26.926 "dma_device_id": "system", 00:04:26.926 "dma_device_type": 1 00:04:26.926 }, 00:04:26.926 { 00:04:26.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.926 "dma_device_type": 2 00:04:26.926 } 00:04:26.926 ], 00:04:26.926 "driver_specific": { 00:04:26.926 "passthru": { 00:04:26.926 "name": "Passthru0", 00:04:26.926 "base_bdev_name": "Malloc2" 00:04:26.926 } 00:04:26.926 } 00:04:26.926 } 00:04:26.926 ]' 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.926 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.927 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.927 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.927 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.927 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.927 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.927 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.927 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.927 07:47:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.927 00:04:26.927 real 0m0.263s 00:04:26.927 user 0m0.181s 00:04:26.927 sys 0m0.029s 00:04:26.927 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.927 07:47:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.927 ************************************ 00:04:26.927 END TEST rpc_daemon_integrity 00:04:26.927 ************************************ 00:04:26.927 07:47:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:26.927 07:47:20 rpc -- rpc/rpc.sh@84 -- # killprocess 2255257 00:04:26.927 07:47:20 rpc -- common/autotest_common.sh@954 -- # '[' -z 2255257 ']' 00:04:26.927 07:47:20 rpc -- common/autotest_common.sh@958 -- # kill -0 2255257 00:04:26.927 07:47:20 rpc -- common/autotest_common.sh@959 -- # uname 00:04:26.927 07:47:20 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.927 07:47:20 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2255257 
00:04:26.927 07:47:21 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.927 07:47:21 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.927 07:47:21 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2255257' 00:04:26.927 killing process with pid 2255257 00:04:26.927 07:47:21 rpc -- common/autotest_common.sh@973 -- # kill 2255257 00:04:26.927 07:47:21 rpc -- common/autotest_common.sh@978 -- # wait 2255257 00:04:27.495 00:04:27.495 real 0m2.018s 00:04:27.495 user 0m2.618s 00:04:27.495 sys 0m0.651s 00:04:27.495 07:47:21 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.495 07:47:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.495 ************************************ 00:04:27.495 END TEST rpc 00:04:27.495 ************************************ 00:04:27.495 07:47:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:27.495 07:47:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.495 07:47:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.495 07:47:21 -- common/autotest_common.sh@10 -- # set +x 00:04:27.495 ************************************ 00:04:27.495 START TEST skip_rpc 00:04:27.495 ************************************ 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:27.495 * Looking for test storage... 00:04:27.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.495 07:47:21 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.495 --rc genhtml_branch_coverage=1 00:04:27.495 --rc genhtml_function_coverage=1 00:04:27.495 --rc genhtml_legend=1 00:04:27.495 --rc geninfo_all_blocks=1 00:04:27.495 --rc geninfo_unexecuted_blocks=1 00:04:27.495 00:04:27.495 ' 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.495 --rc genhtml_branch_coverage=1 00:04:27.495 --rc genhtml_function_coverage=1 00:04:27.495 --rc genhtml_legend=1 00:04:27.495 --rc geninfo_all_blocks=1 00:04:27.495 --rc geninfo_unexecuted_blocks=1 00:04:27.495 00:04:27.495 ' 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.495 --rc genhtml_branch_coverage=1 00:04:27.495 --rc genhtml_function_coverage=1 00:04:27.495 --rc genhtml_legend=1 00:04:27.495 --rc geninfo_all_blocks=1 00:04:27.495 --rc geninfo_unexecuted_blocks=1 00:04:27.495 00:04:27.495 ' 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.495 --rc genhtml_branch_coverage=1 00:04:27.495 --rc genhtml_function_coverage=1 00:04:27.495 --rc genhtml_legend=1 00:04:27.495 --rc geninfo_all_blocks=1 00:04:27.495 --rc geninfo_unexecuted_blocks=1 00:04:27.495 00:04:27.495 ' 00:04:27.495 07:47:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:27.495 07:47:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.495 07:47:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.495 07:47:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.753 ************************************ 00:04:27.753 START TEST skip_rpc 00:04:27.753 ************************************ 00:04:27.753 07:47:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:27.753 
07:47:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2255897 00:04:27.753 07:47:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.753 07:47:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:27.753 07:47:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:27.753 [2024-11-27 07:47:21.662580] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:04:27.753 [2024-11-27 07:47:21.662620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255897 ] 00:04:27.753 [2024-11-27 07:47:21.723634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.753 [2024-11-27 07:47:21.763995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.023 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2255897 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2255897 ']' 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2255897 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2255897 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2255897' 00:04:33.024 killing process with pid 2255897 00:04:33.024 07:47:26 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2255897 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2255897 00:04:33.024 00:04:33.024 real 0m5.371s 00:04:33.024 user 0m5.145s 00:04:33.024 sys 0m0.266s 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.024 07:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.024 ************************************ 00:04:33.024 END TEST skip_rpc 00:04:33.024 ************************************ 00:04:33.024 07:47:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:33.024 07:47:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.024 07:47:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.024 07:47:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.024 ************************************ 00:04:33.024 START TEST skip_rpc_with_json 00:04:33.024 ************************************ 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2256842 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2256842 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2256842 ']' 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.024 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.024 [2024-11-27 07:47:27.106128] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
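A minimal sketch of the skip_rpc flow traced above, assuming the same tree layout; rpc.py stands in here for the harness's rpc_cmd wrapper, and the call is expected to fail precisely because the target was started with --no-rpc-server:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5
  # With no RPC server listening, spdk_get_version must fail for the test to pass:
  if "$SPDK/scripts/rpc.py" spdk_get_version; then
      echo "unexpected: RPC succeeded without an RPC server" >&2
  fi
  kill "$spdk_pid"
  wait "$spdk_pid"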
00:04:33.024 [2024-11-27 07:47:27.106173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256842 ] 00:04:33.282 [2024-11-27 07:47:27.166653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.282 [2024-11-27 07:47:27.206586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.539 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.540 [2024-11-27 07:47:27.422336] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:33.540 request: 00:04:33.540 { 00:04:33.540 "trtype": "tcp", 00:04:33.540 "method": "nvmf_get_transports", 00:04:33.540 "req_id": 1 00:04:33.540 } 00:04:33.540 Got JSON-RPC error response 00:04:33.540 response: 00:04:33.540 { 00:04:33.540 "code": -19, 00:04:33.540 "message": "No such device" 00:04:33.540 } 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.540 [2024-11-27 07:47:27.430435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.540 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:33.540 { 00:04:33.540 "subsystems": [ 00:04:33.540 { 00:04:33.540 "subsystem": "fsdev", 00:04:33.540 "config": [ 00:04:33.540 { 00:04:33.540 "method": "fsdev_set_opts", 00:04:33.540 "params": { 00:04:33.540 "fsdev_io_pool_size": 65535, 00:04:33.540 "fsdev_io_cache_size": 256 00:04:33.540 } 00:04:33.540 } 00:04:33.540 ] 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "subsystem": "vfio_user_target", 00:04:33.540 "config": null 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "subsystem": "keyring", 00:04:33.540 "config": [] 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "subsystem": "iobuf", 00:04:33.540 "config": [ 00:04:33.540 { 00:04:33.540 "method": "iobuf_set_options", 00:04:33.540 "params": { 00:04:33.540 "small_pool_count": 8192, 00:04:33.540 "large_pool_count": 1024, 00:04:33.540 "small_bufsize": 8192, 00:04:33.540 "large_bufsize": 135168, 00:04:33.540 "enable_numa": false 00:04:33.540 } 00:04:33.540 } 
00:04:33.540 ] 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "subsystem": "sock", 00:04:33.540 "config": [ 00:04:33.540 { 00:04:33.540 "method": "sock_set_default_impl", 00:04:33.540 "params": { 00:04:33.540 "impl_name": "posix" 00:04:33.540 } 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "method": "sock_impl_set_options", 00:04:33.540 "params": { 00:04:33.540 "impl_name": "ssl", 00:04:33.540 "recv_buf_size": 4096, 00:04:33.540 "send_buf_size": 4096, 00:04:33.540 "enable_recv_pipe": true, 00:04:33.540 "enable_quickack": false, 00:04:33.540 "enable_placement_id": 0, 00:04:33.540 "enable_zerocopy_send_server": true, 00:04:33.540 "enable_zerocopy_send_client": false, 00:04:33.540 "zerocopy_threshold": 0, 00:04:33.540 "tls_version": 0, 00:04:33.540 "enable_ktls": false 00:04:33.540 } 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "method": "sock_impl_set_options", 00:04:33.540 "params": { 00:04:33.540 "impl_name": "posix", 00:04:33.540 "recv_buf_size": 2097152, 00:04:33.540 "send_buf_size": 2097152, 00:04:33.540 "enable_recv_pipe": true, 00:04:33.540 "enable_quickack": false, 00:04:33.540 "enable_placement_id": 0, 00:04:33.540 "enable_zerocopy_send_server": true, 00:04:33.540 "enable_zerocopy_send_client": false, 00:04:33.540 "zerocopy_threshold": 0, 00:04:33.540 "tls_version": 0, 00:04:33.540 "enable_ktls": false 00:04:33.540 } 00:04:33.540 } 00:04:33.540 ] 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "subsystem": "vmd", 00:04:33.540 "config": [] 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "subsystem": "accel", 00:04:33.540 "config": [ 00:04:33.540 { 00:04:33.540 "method": "accel_set_options", 00:04:33.540 "params": { 00:04:33.540 "small_cache_size": 128, 00:04:33.540 "large_cache_size": 16, 00:04:33.540 "task_count": 2048, 00:04:33.540 "sequence_count": 2048, 00:04:33.540 "buf_count": 2048 00:04:33.540 } 00:04:33.540 } 00:04:33.540 ] 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "subsystem": "bdev", 00:04:33.540 "config": [ 00:04:33.540 { 00:04:33.540 "method": "bdev_set_options", 00:04:33.540 "params": { 00:04:33.540 "bdev_io_pool_size": 65535, 00:04:33.540 "bdev_io_cache_size": 256, 00:04:33.540 "bdev_auto_examine": true, 00:04:33.540 "iobuf_small_cache_size": 128, 00:04:33.540 "iobuf_large_cache_size": 16 00:04:33.540 } 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "method": "bdev_raid_set_options", 00:04:33.540 "params": { 00:04:33.540 "process_window_size_kb": 1024, 00:04:33.540 "process_max_bandwidth_mb_sec": 0 00:04:33.540 } 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "method": "bdev_iscsi_set_options", 00:04:33.540 "params": { 00:04:33.540 "timeout_sec": 30 00:04:33.540 } 00:04:33.540 }, 00:04:33.540 { 00:04:33.540 "method": "bdev_nvme_set_options", 00:04:33.540 "params": { 00:04:33.540 "action_on_timeout": "none", 00:04:33.540 "timeout_us": 0, 00:04:33.540 "timeout_admin_us": 0, 00:04:33.540 "keep_alive_timeout_ms": 10000, 00:04:33.540 "arbitration_burst": 0, 00:04:33.540 "low_priority_weight": 0, 00:04:33.540 "medium_priority_weight": 0, 00:04:33.540 "high_priority_weight": 0, 00:04:33.540 "nvme_adminq_poll_period_us": 10000, 00:04:33.540 "nvme_ioq_poll_period_us": 0, 00:04:33.540 "io_queue_requests": 0, 00:04:33.540 "delay_cmd_submit": true, 00:04:33.540 "transport_retry_count": 4, 00:04:33.540 "bdev_retry_count": 3, 00:04:33.540 "transport_ack_timeout": 0, 00:04:33.540 "ctrlr_loss_timeout_sec": 0, 00:04:33.540 "reconnect_delay_sec": 0, 00:04:33.540 "fast_io_fail_timeout_sec": 0, 00:04:33.540 "disable_auto_failback": false, 00:04:33.540 "generate_uuids": false, 00:04:33.540 "transport_tos": 
0, 00:04:33.540 "nvme_error_stat": false, 00:04:33.540 "rdma_srq_size": 0, 00:04:33.540 "io_path_stat": false, 00:04:33.540 "allow_accel_sequence": false, 00:04:33.540 "rdma_max_cq_size": 0, 00:04:33.540 "rdma_cm_event_timeout_ms": 0, 00:04:33.540 "dhchap_digests": [ 00:04:33.540 "sha256", 00:04:33.540 "sha384", 00:04:33.540 "sha512" 00:04:33.540 ], 00:04:33.540 "dhchap_dhgroups": [ 00:04:33.540 "null", 00:04:33.540 "ffdhe2048", 00:04:33.540 "ffdhe3072", 00:04:33.540 "ffdhe4096", 00:04:33.540 "ffdhe6144", 00:04:33.540 "ffdhe8192" 00:04:33.540 ] 00:04:33.541 } 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "method": "bdev_nvme_set_hotplug", 00:04:33.541 "params": { 00:04:33.541 "period_us": 100000, 00:04:33.541 "enable": false 00:04:33.541 } 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "method": "bdev_wait_for_examine" 00:04:33.541 } 00:04:33.541 ] 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "subsystem": "scsi", 00:04:33.541 "config": null 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "subsystem": "scheduler", 00:04:33.541 "config": [ 00:04:33.541 { 00:04:33.541 "method": "framework_set_scheduler", 00:04:33.541 "params": { 00:04:33.541 "name": "static" 00:04:33.541 } 00:04:33.541 } 00:04:33.541 ] 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "subsystem": "vhost_scsi", 00:04:33.541 "config": [] 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "subsystem": "vhost_blk", 00:04:33.541 "config": [] 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "subsystem": "ublk", 00:04:33.541 "config": [] 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "subsystem": "nbd", 00:04:33.541 "config": [] 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "subsystem": "nvmf", 00:04:33.541 "config": [ 00:04:33.541 { 00:04:33.541 "method": "nvmf_set_config", 00:04:33.541 "params": { 00:04:33.541 "discovery_filter": "match_any", 00:04:33.541 "admin_cmd_passthru": { 00:04:33.541 "identify_ctrlr": false 00:04:33.541 }, 00:04:33.541 "dhchap_digests": [ 00:04:33.541 "sha256", 00:04:33.541 "sha384", 00:04:33.541 "sha512" 00:04:33.541 ], 00:04:33.541 "dhchap_dhgroups": [ 00:04:33.541 "null", 00:04:33.541 "ffdhe2048", 00:04:33.541 "ffdhe3072", 00:04:33.541 "ffdhe4096", 00:04:33.541 "ffdhe6144", 00:04:33.541 "ffdhe8192" 00:04:33.541 ] 00:04:33.541 } 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "method": "nvmf_set_max_subsystems", 00:04:33.541 "params": { 00:04:33.541 "max_subsystems": 1024 00:04:33.541 } 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "method": "nvmf_set_crdt", 00:04:33.541 "params": { 00:04:33.541 "crdt1": 0, 00:04:33.541 "crdt2": 0, 00:04:33.541 "crdt3": 0 00:04:33.541 } 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "method": "nvmf_create_transport", 00:04:33.541 "params": { 00:04:33.541 "trtype": "TCP", 00:04:33.541 "max_queue_depth": 128, 00:04:33.541 "max_io_qpairs_per_ctrlr": 127, 00:04:33.541 "in_capsule_data_size": 4096, 00:04:33.541 "max_io_size": 131072, 00:04:33.541 "io_unit_size": 131072, 00:04:33.541 "max_aq_depth": 128, 00:04:33.541 "num_shared_buffers": 511, 00:04:33.541 "buf_cache_size": 4294967295, 00:04:33.541 "dif_insert_or_strip": false, 00:04:33.541 "zcopy": false, 00:04:33.541 "c2h_success": true, 00:04:33.541 "sock_priority": 0, 00:04:33.541 "abort_timeout_sec": 1, 00:04:33.541 "ack_timeout": 0, 00:04:33.541 "data_wr_pool_size": 0 00:04:33.541 } 00:04:33.541 } 00:04:33.541 ] 00:04:33.541 }, 00:04:33.541 { 00:04:33.541 "subsystem": "iscsi", 00:04:33.541 "config": [ 00:04:33.541 { 00:04:33.541 "method": "iscsi_set_options", 00:04:33.541 "params": { 00:04:33.541 "node_base": "iqn.2016-06.io.spdk", 00:04:33.541 "max_sessions": 
128, 00:04:33.541 "max_connections_per_session": 2, 00:04:33.541 "max_queue_depth": 64, 00:04:33.541 "default_time2wait": 2, 00:04:33.541 "default_time2retain": 20, 00:04:33.541 "first_burst_length": 8192, 00:04:33.541 "immediate_data": true, 00:04:33.541 "allow_duplicated_isid": false, 00:04:33.541 "error_recovery_level": 0, 00:04:33.541 "nop_timeout": 60, 00:04:33.541 "nop_in_interval": 30, 00:04:33.541 "disable_chap": false, 00:04:33.541 "require_chap": false, 00:04:33.541 "mutual_chap": false, 00:04:33.541 "chap_group": 0, 00:04:33.541 "max_large_datain_per_connection": 64, 00:04:33.541 "max_r2t_per_connection": 4, 00:04:33.541 "pdu_pool_size": 36864, 00:04:33.541 "immediate_data_pool_size": 16384, 00:04:33.541 "data_out_pool_size": 2048 00:04:33.541 } 00:04:33.541 } 00:04:33.541 ] 00:04:33.541 } 00:04:33.541 ] 00:04:33.541 } 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2256842 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2256842 ']' 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2256842 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2256842 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2256842' 00:04:33.541 killing process with pid 2256842 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2256842 00:04:33.541 07:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2256842 00:04:34.109 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2256865 00:04:34.109 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:34.109 07:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2256865 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2256865 ']' 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2256865 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2256865 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2256865' 00:04:39.381 killing process with pid 2256865 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2256865 00:04:39.381 07:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2256865 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.381 00:04:39.381 real 0m6.238s 00:04:39.381 user 0m5.930s 00:04:39.381 sys 0m0.572s 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.381 ************************************ 00:04:39.381 END TEST skip_rpc_with_json 00:04:39.381 ************************************ 00:04:39.381 07:47:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:39.381 07:47:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.381 07:47:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.381 07:47:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.381 ************************************ 00:04:39.381 START TEST skip_rpc_with_delay 00:04:39.381 ************************************ 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.381 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.382 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.382 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:39.382 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.382 
[2024-11-27 07:47:33.408718] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:39.382 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:39.382 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.382 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.382 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.382 00:04:39.382 real 0m0.064s 00:04:39.382 user 0m0.041s 00:04:39.382 sys 0m0.022s 00:04:39.382 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.382 07:47:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:39.382 ************************************ 00:04:39.382 END TEST skip_rpc_with_delay 00:04:39.382 ************************************ 00:04:39.382 07:47:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:39.382 07:47:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:39.382 07:47:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:39.382 07:47:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.382 07:47:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.382 07:47:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.382 ************************************ 00:04:39.382 START TEST exit_on_failed_rpc_init 00:04:39.382 ************************************ 00:04:39.382 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:39.382 07:47:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2257836 00:04:39.382 07:47:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2257836 00:04:39.382 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2257836 ']' 00:04:39.382 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.382 07:47:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.382 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.382 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.382 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.382 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.642 [2024-11-27 07:47:33.519580] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
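The skip_rpc_with_delay check that just completed boils down to a single invalid flag combination; a minimal sketch, assuming the same tree, is simply to confirm the target refuses to start:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # --wait-for-rpc without an RPC server is rejected immediately (non-zero exit):
  if "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target started despite --no-rpc-server --wait-for-rpc" >&2
  fi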
00:04:39.642 [2024-11-27 07:47:33.519620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257836 ] 00:04:39.642 [2024-11-27 07:47:33.580686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.642 [2024-11-27 07:47:33.624413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:39.902 07:47:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.902 [2024-11-27 07:47:33.887748] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:04:39.902 [2024-11-27 07:47:33.887795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258017 ] 00:04:39.902 [2024-11-27 07:47:33.950126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.902 [2024-11-27 07:47:33.991248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.902 [2024-11-27 07:47:33.991318] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
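The "socket in use" error above is exactly what exit_on_failed_rpc_init provokes: a second target binding the same default /var/tmp/spdk.sock as the first. A minimal sketch of the usual way to run a second instance alongside the first, with /var/tmp/spdk2.sock as an arbitrary example path:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0x2 -r /var/tmp/spdk2.sock &
  sleep 5
  # RPCs for the second instance are addressed to its private socket:
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk2.sock spdk_get_version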
00:04:39.902 [2024-11-27 07:47:33.991328] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.902 [2024-11-27 07:47:33.991337] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2257836 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2257836 ']' 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2257836 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2257836 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2257836' 00:04:40.162 killing process with pid 2257836 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2257836 00:04:40.162 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2257836 00:04:40.422 00:04:40.422 real 0m0.907s 00:04:40.422 user 0m0.974s 00:04:40.422 sys 0m0.363s 00:04:40.422 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.422 07:47:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.422 ************************************ 00:04:40.422 END TEST exit_on_failed_rpc_init 00:04:40.422 ************************************ 00:04:40.422 07:47:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.422 00:04:40.422 real 0m13.023s 00:04:40.422 user 0m12.289s 00:04:40.422 sys 0m1.495s 00:04:40.422 07:47:34 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.422 07:47:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.422 ************************************ 00:04:40.422 END TEST skip_rpc 00:04:40.422 ************************************ 00:04:40.422 07:47:34 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:40.422 07:47:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.422 07:47:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.422 07:47:34 -- 
common/autotest_common.sh@10 -- # set +x 00:04:40.422 ************************************ 00:04:40.422 START TEST rpc_client 00:04:40.422 ************************************ 00:04:40.422 07:47:34 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:40.682 * Looking for test storage... 00:04:40.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:40.682 07:47:34 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.682 07:47:34 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.682 07:47:34 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.682 07:47:34 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.682 07:47:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.682 07:47:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.682 07:47:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.682 07:47:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.682 07:47:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.682 07:47:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.682 07:47:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.682 07:47:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.682 07:47:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.682 07:47:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.683 07:47:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:40.683 07:47:34 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.683 07:47:34 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.683 --rc genhtml_branch_coverage=1 00:04:40.683 --rc genhtml_function_coverage=1 00:04:40.683 --rc genhtml_legend=1 00:04:40.683 --rc geninfo_all_blocks=1 00:04:40.683 --rc geninfo_unexecuted_blocks=1 00:04:40.683 00:04:40.683 ' 00:04:40.683 07:47:34 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.683 --rc genhtml_branch_coverage=1 00:04:40.683 --rc genhtml_function_coverage=1 00:04:40.683 --rc genhtml_legend=1 00:04:40.683 --rc geninfo_all_blocks=1 00:04:40.683 --rc geninfo_unexecuted_blocks=1 00:04:40.683 00:04:40.683 ' 00:04:40.683 07:47:34 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.683 --rc genhtml_branch_coverage=1 00:04:40.683 --rc genhtml_function_coverage=1 00:04:40.683 --rc genhtml_legend=1 00:04:40.683 --rc geninfo_all_blocks=1 00:04:40.683 --rc geninfo_unexecuted_blocks=1 00:04:40.683 00:04:40.683 ' 00:04:40.683 07:47:34 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.683 --rc genhtml_branch_coverage=1 00:04:40.683 --rc genhtml_function_coverage=1 00:04:40.683 --rc genhtml_legend=1 00:04:40.683 --rc geninfo_all_blocks=1 00:04:40.683 --rc geninfo_unexecuted_blocks=1 00:04:40.683 00:04:40.683 ' 00:04:40.683 07:47:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:40.683 OK 00:04:40.683 07:47:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.683 00:04:40.683 real 0m0.182s 00:04:40.683 user 0m0.105s 00:04:40.683 sys 0m0.091s 00:04:40.683 07:47:34 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.683 07:47:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:40.683 ************************************ 00:04:40.683 END TEST rpc_client 00:04:40.683 ************************************ 00:04:40.683 07:47:34 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
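A condensed sketch of the lcov version check traced above (an equivalent rewrite, not the scripts/common.sh source itself): the two version strings are split on '.', '-' and ':' and compared field by field, so 1.15 is treated as older than 2 and the lcov-1.x LCOV_OPTS branch is taken.

  # Condensed equivalent of "lt 1.15 2" / cmp_versions: field-by-field compare.
  version_lt() {
      local IFS='.-:'
      local -a v1=($1) v2=($2)
      local i
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1
  }
  version_lt 1.15 2 && echo "lcov older than 2: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"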
00:04:40.683 07:47:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.683 07:47:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.683 07:47:34 -- common/autotest_common.sh@10 -- # set +x 00:04:40.683 ************************************ 00:04:40.683 START TEST json_config 00:04:40.683 ************************************ 00:04:40.683 07:47:34 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:40.944 07:47:34 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.944 07:47:34 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.944 07:47:34 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.944 07:47:34 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.944 07:47:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.944 07:47:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.944 07:47:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.944 07:47:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.944 07:47:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.944 07:47:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.944 07:47:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.944 07:47:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.944 07:47:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.944 07:47:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.944 07:47:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.944 07:47:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:40.944 07:47:34 json_config -- scripts/common.sh@345 -- # : 1 00:04:40.944 07:47:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.944 07:47:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.944 07:47:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:40.944 07:47:34 json_config -- scripts/common.sh@353 -- # local d=1 00:04:40.944 07:47:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.944 07:47:34 json_config -- scripts/common.sh@355 -- # echo 1 00:04:40.944 07:47:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.944 07:47:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:40.944 07:47:34 json_config -- scripts/common.sh@353 -- # local d=2 00:04:40.944 07:47:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.944 07:47:34 json_config -- scripts/common.sh@355 -- # echo 2 00:04:40.944 07:47:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.944 07:47:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.944 07:47:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.944 07:47:34 json_config -- scripts/common.sh@368 -- # return 0 00:04:40.944 07:47:34 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.944 07:47:34 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.944 --rc genhtml_branch_coverage=1 00:04:40.944 --rc genhtml_function_coverage=1 00:04:40.944 --rc genhtml_legend=1 00:04:40.944 --rc geninfo_all_blocks=1 00:04:40.944 --rc geninfo_unexecuted_blocks=1 00:04:40.944 00:04:40.944 ' 00:04:40.944 07:47:34 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.944 --rc genhtml_branch_coverage=1 00:04:40.944 --rc genhtml_function_coverage=1 00:04:40.944 --rc genhtml_legend=1 00:04:40.944 --rc geninfo_all_blocks=1 00:04:40.944 --rc geninfo_unexecuted_blocks=1 00:04:40.944 00:04:40.944 ' 00:04:40.944 07:47:34 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.944 --rc genhtml_branch_coverage=1 00:04:40.944 --rc genhtml_function_coverage=1 00:04:40.944 --rc genhtml_legend=1 00:04:40.944 --rc geninfo_all_blocks=1 00:04:40.944 --rc geninfo_unexecuted_blocks=1 00:04:40.944 00:04:40.944 ' 00:04:40.944 07:47:34 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.944 --rc genhtml_branch_coverage=1 00:04:40.944 --rc genhtml_function_coverage=1 00:04:40.944 --rc genhtml_legend=1 00:04:40.944 --rc geninfo_all_blocks=1 00:04:40.944 --rc geninfo_unexecuted_blocks=1 00:04:40.944 00:04:40.944 ' 00:04:40.944 07:47:34 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:40.944 07:47:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.944 07:47:34 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:40.944 07:47:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:40.944 07:47:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.944 07:47:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.944 07:47:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.944 07:47:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.945 07:47:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.945 07:47:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.945 07:47:34 json_config -- paths/export.sh@5 -- # export PATH 00:04:40.945 07:47:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.945 07:47:34 json_config -- nvmf/common.sh@51 -- # : 0 00:04:40.945 07:47:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:40.945 07:47:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:40.945 07:47:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.945 07:47:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.945 07:47:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.945 07:47:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:40.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:40.945 07:47:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:40.945 07:47:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:40.945 07:47:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:40.945 INFO: JSON configuration test init 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:40.945 07:47:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.945 07:47:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:40.945 07:47:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.945 07:47:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.945 07:47:34 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:40.945 07:47:34 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:40.945 07:47:34 json_config -- json_config/common.sh@10 -- # shift 00:04:40.945 07:47:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:40.945 07:47:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:40.945 07:47:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:40.945 07:47:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.945 07:47:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.945 07:47:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2258205 00:04:40.945 07:47:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:40.945 Waiting for target to run... 00:04:40.945 07:47:34 json_config -- json_config/common.sh@25 -- # waitforlisten 2258205 /var/tmp/spdk_tgt.sock 00:04:40.945 07:47:34 json_config -- common/autotest_common.sh@835 -- # '[' -z 2258205 ']' 00:04:40.945 07:47:34 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.945 07:47:34 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.945 07:47:34 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:40.945 07:47:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:40.945 07:47:34 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.945 07:47:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.945 [2024-11-27 07:47:34.975152] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
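A minimal sketch, assuming the paths from this run, of how the json_config target started above is driven: it comes up with --wait-for-rpc on a private socket, its configuration is pushed in with load_config (the gen_nvme.sh pipe that follows in this log), and save_config reads it back out for comparison. The sleep is a crude stand-in for the harness's waitforlisten; the output path is an arbitrary example.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  sleep 2    # crude stand-in for waitforlisten
  # Feed the generated NVMe config over JSON-RPC, as json_config.sh does next:
  "$SPDK/scripts/gen_nvme.sh" --json-with-subsystems | $RPC load_config
  # The applied configuration can then be captured back out:
  $RPC save_config > /tmp/spdk_tgt_config.json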
00:04:40.945 [2024-11-27 07:47:34.975201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258205 ] 00:04:41.204 [2024-11-27 07:47:35.251344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.204 [2024-11-27 07:47:35.286003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.772 07:47:35 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.772 07:47:35 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:41.772 07:47:35 json_config -- json_config/common.sh@26 -- # echo '' 00:04:41.772 00:04:41.772 07:47:35 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:41.772 07:47:35 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:41.772 07:47:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.772 07:47:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.772 07:47:35 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:41.772 07:47:35 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:41.772 07:47:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.772 07:47:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.772 07:47:35 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:41.772 07:47:35 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:41.772 07:47:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:45.062 07:47:38 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:45.062 07:47:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:45.062 07:47:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.062 07:47:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.062 07:47:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:45.062 07:47:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:45.062 07:47:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:45.062 07:47:38 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:45.062 07:47:38 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:45.062 07:47:38 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:45.062 07:47:38 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:45.062 07:47:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:45.062 07:47:39 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@54 -- # sort 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:45.062 07:47:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.062 07:47:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:45.062 07:47:39 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:45.320 07:47:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.320 07:47:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.320 07:47:39 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:45.320 07:47:39 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:45.320 07:47:39 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:45.320 07:47:39 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:45.320 07:47:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:45.320 MallocForNvmf0 00:04:45.320 07:47:39 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:45.320 07:47:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:45.579 MallocForNvmf1 00:04:45.579 07:47:39 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:45.579 07:47:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:45.839 [2024-11-27 07:47:39.732403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.839 07:47:39 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:45.839 07:47:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:45.839 07:47:39 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:45.839 07:47:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:46.097 07:47:40 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:46.097 07:47:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:46.356 07:47:40 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:46.356 07:47:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:46.615 [2024-11-27 07:47:40.494805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:46.615 07:47:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:46.615 07:47:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.615 07:47:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.615 07:47:40 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:46.615 07:47:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.615 07:47:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.615 07:47:40 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:46.615 07:47:40 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:46.615 07:47:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:46.874 MallocBdevForConfigChangeCheck 00:04:46.874 07:47:40 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:46.874 07:47:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.874 07:47:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.874 07:47:40 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:46.874 07:47:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.132 07:47:41 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:47.132 INFO: shutting down applications... 
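The trace above is the whole create_nvmf_subsystem_config step driven through scripts/rpc.py against the target's /var/tmp/spdk_tgt.sock socket (the earlier load_config of gen_nvme.sh output attaches the local NVMe bdevs; this covers only the NVMf pieces). A minimal standalone sketch of that RPC sequence, assuming an spdk_tgt is already listening on the socket; the redirect into spdk_tgt_config.json stands in for however json_config.sh captures the save_config output:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
# backing malloc bdevs for the two namespaces
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, then one subsystem carrying both namespaces and a listener
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
# snapshot the resulting configuration for the relaunch comparison later in the run
$RPC save_config > /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json

Everything the test does after this point exercises exactly that saved JSON.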
00:04:47.132 07:47:41 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:47.132 07:47:41 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:47.132 07:47:41 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:47.132 07:47:41 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:49.039 Calling clear_iscsi_subsystem 00:04:49.039 Calling clear_nvmf_subsystem 00:04:49.039 Calling clear_nbd_subsystem 00:04:49.039 Calling clear_ublk_subsystem 00:04:49.039 Calling clear_vhost_blk_subsystem 00:04:49.039 Calling clear_vhost_scsi_subsystem 00:04:49.039 Calling clear_bdev_subsystem 00:04:49.039 07:47:42 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:49.039 07:47:42 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:49.039 07:47:42 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:49.039 07:47:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.039 07:47:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:49.039 07:47:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:49.039 07:47:43 json_config -- json_config/json_config.sh@352 -- # break 00:04:49.039 07:47:43 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:49.039 07:47:43 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:49.039 07:47:43 json_config -- json_config/common.sh@31 -- # local app=target 00:04:49.039 07:47:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:49.039 07:47:43 json_config -- json_config/common.sh@35 -- # [[ -n 2258205 ]] 00:04:49.039 07:47:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2258205 00:04:49.039 07:47:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:49.039 07:47:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.039 07:47:43 json_config -- json_config/common.sh@41 -- # kill -0 2258205 00:04:49.039 07:47:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.609 07:47:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.609 07:47:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.609 07:47:43 json_config -- json_config/common.sh@41 -- # kill -0 2258205 00:04:49.609 07:47:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:49.609 07:47:43 json_config -- json_config/common.sh@43 -- # break 00:04:49.609 07:47:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:49.609 07:47:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:49.609 SPDK target shutdown done 00:04:49.609 07:47:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:49.609 INFO: relaunching applications... 
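The shutdown path traced above is a plain signal-and-poll loop from json_config/common.sh: send SIGINT to the app PID, then re-check it for up to 30 half-second intervals. A hedged sketch of the same pattern; the 30/0.5 numbers match the trace, while the function name and the force-kill fallback are only illustrative:

shutdown_app() {
  local pid=$1
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
    # kill -0 only asks whether the process still exists
    if ! kill -0 "$pid" 2>/dev/null; then
      echo 'SPDK target shutdown done'
      return 0
    fi
    sleep 0.5
  done
  # assumption: escalate on timeout; the test's own timeout handling may differ
  kill -9 "$pid"
}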
00:04:49.609 07:47:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:49.609 07:47:43 json_config -- json_config/common.sh@9 -- # local app=target 00:04:49.609 07:47:43 json_config -- json_config/common.sh@10 -- # shift 00:04:49.609 07:47:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.609 07:47:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.609 07:47:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.609 07:47:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.609 07:47:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.609 07:47:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2259923 00:04:49.609 07:47:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.609 Waiting for target to run... 00:04:49.609 07:47:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:49.609 07:47:43 json_config -- json_config/common.sh@25 -- # waitforlisten 2259923 /var/tmp/spdk_tgt.sock 00:04:49.609 07:47:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 2259923 ']' 00:04:49.609 07:47:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.609 07:47:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.609 07:47:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.609 07:47:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.609 07:47:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.609 [2024-11-27 07:47:43.604580] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:04:49.609 [2024-11-27 07:47:43.604638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2259923 ] 00:04:49.868 [2024-11-27 07:47:43.883665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.868 [2024-11-27 07:47:43.918180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.159 [2024-11-27 07:47:46.950574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.159 [2024-11-27 07:47:46.982901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:53.159 07:47:47 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.159 07:47:47 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:53.159 07:47:47 json_config -- json_config/common.sh@26 -- # echo '' 00:04:53.159 00:04:53.159 07:47:47 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:53.159 07:47:47 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:53.159 INFO: Checking if target configuration is the same... 
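Relaunching from the saved JSON is just spdk_tgt with --json plus a wait for the RPC socket to come back; the test does the waiting with its waitforlisten helper. A rough equivalent, where polling spdk_get_version over the socket is an assumption standing in for that helper:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK/spdk_tgt_config.json" &
tgt_pid=$!
# poll the UNIX-domain RPC socket until the relaunched target answers
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
  sleep 0.5
done

The check that follows in the log then normalizes both the live rpc.py save_config output and the file with config_filter.py -method sort and compares them with a plain diff -u.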
00:04:53.159 07:47:47 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.159 07:47:47 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:53.159 07:47:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:53.159 + '[' 2 -ne 2 ']' 00:04:53.159 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:53.159 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:53.159 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:53.159 +++ basename /dev/fd/62 00:04:53.159 ++ mktemp /tmp/62.XXX 00:04:53.159 + tmp_file_1=/tmp/62.0ot 00:04:53.159 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.159 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:53.159 + tmp_file_2=/tmp/spdk_tgt_config.json.GOv 00:04:53.159 + ret=0 00:04:53.159 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:53.421 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:53.421 + diff -u /tmp/62.0ot /tmp/spdk_tgt_config.json.GOv 00:04:53.421 + echo 'INFO: JSON config files are the same' 00:04:53.421 INFO: JSON config files are the same 00:04:53.421 + rm /tmp/62.0ot /tmp/spdk_tgt_config.json.GOv 00:04:53.421 + exit 0 00:04:53.421 07:47:47 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:53.421 07:47:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:53.421 INFO: changing configuration and checking if this can be detected... 00:04:53.421 07:47:47 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:53.421 07:47:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:53.681 07:47:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:53.681 07:47:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:53.681 07:47:47 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.681 + '[' 2 -ne 2 ']' 00:04:53.681 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:53.681 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:53.682 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:53.682 +++ basename /dev/fd/62 00:04:53.682 ++ mktemp /tmp/62.XXX 00:04:53.682 + tmp_file_1=/tmp/62.9Bw 00:04:53.682 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.682 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:53.682 + tmp_file_2=/tmp/spdk_tgt_config.json.7QA 00:04:53.682 + ret=0 00:04:53.682 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:53.964 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:53.964 + diff -u /tmp/62.9Bw /tmp/spdk_tgt_config.json.7QA 00:04:53.964 + ret=1 00:04:53.964 + echo '=== Start of file: /tmp/62.9Bw ===' 00:04:53.964 + cat /tmp/62.9Bw 00:04:53.964 + echo '=== End of file: /tmp/62.9Bw ===' 00:04:53.964 + echo '' 00:04:53.964 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7QA ===' 00:04:53.964 + cat /tmp/spdk_tgt_config.json.7QA 00:04:53.964 + echo '=== End of file: /tmp/spdk_tgt_config.json.7QA ===' 00:04:53.964 + echo '' 00:04:53.964 + rm /tmp/62.9Bw /tmp/spdk_tgt_config.json.7QA 00:04:53.964 + exit 1 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:53.964 INFO: configuration change detected. 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:53.964 07:47:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.964 07:47:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@324 -- # [[ -n 2259923 ]] 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:53.964 07:47:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.964 07:47:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:53.964 07:47:48 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:53.964 07:47:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.964 07:47:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.338 07:47:48 json_config -- json_config/json_config.sh@330 -- # killprocess 2259923 00:04:54.338 07:47:48 json_config -- common/autotest_common.sh@954 -- # '[' -z 2259923 ']' 00:04:54.338 07:47:48 json_config -- common/autotest_common.sh@958 -- # kill -0 2259923 00:04:54.338 07:47:48 json_config -- common/autotest_common.sh@959 -- # uname 00:04:54.338 07:47:48 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.338 07:47:48 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2259923 00:04:54.338 07:47:48 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.338 07:47:48 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.338 07:47:48 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2259923' 00:04:54.338 killing process with pid 2259923 00:04:54.338 07:47:48 json_config -- common/autotest_common.sh@973 -- # kill 2259923 00:04:54.338 07:47:48 json_config -- common/autotest_common.sh@978 -- # wait 2259923 00:04:55.758 07:47:49 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.758 07:47:49 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:55.758 07:47:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.758 07:47:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.758 07:47:49 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:55.758 07:47:49 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:55.758 INFO: Success 00:04:55.758 00:04:55.758 real 0m14.897s 00:04:55.758 user 0m15.399s 00:04:55.758 sys 0m2.414s 00:04:55.758 07:47:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.758 07:47:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.758 ************************************ 00:04:55.758 END TEST json_config 00:04:55.758 ************************************ 00:04:55.758 07:47:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:55.758 07:47:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.758 07:47:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.758 07:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:55.758 ************************************ 00:04:55.758 START TEST json_config_extra_key 00:04:55.758 ************************************ 00:04:55.758 07:47:49 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:55.758 07:47:49 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.758 07:47:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.758 07:47:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.758 07:47:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.758 07:47:49 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:55.758 07:47:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:55.759 07:47:49 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.759 07:47:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.759 --rc genhtml_branch_coverage=1 00:04:55.759 --rc genhtml_function_coverage=1 00:04:55.759 --rc genhtml_legend=1 00:04:55.759 --rc geninfo_all_blocks=1 00:04:55.759 --rc geninfo_unexecuted_blocks=1 00:04:55.759 00:04:55.759 ' 00:04:55.759 07:47:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.759 --rc genhtml_branch_coverage=1 00:04:55.759 --rc genhtml_function_coverage=1 00:04:55.759 --rc genhtml_legend=1 00:04:55.759 --rc geninfo_all_blocks=1 00:04:55.759 --rc geninfo_unexecuted_blocks=1 00:04:55.759 00:04:55.759 ' 00:04:55.759 07:47:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.759 --rc genhtml_branch_coverage=1 00:04:55.759 --rc genhtml_function_coverage=1 00:04:55.759 --rc genhtml_legend=1 00:04:55.759 --rc geninfo_all_blocks=1 00:04:55.759 --rc geninfo_unexecuted_blocks=1 00:04:55.759 00:04:55.759 ' 00:04:55.759 07:47:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.759 --rc genhtml_branch_coverage=1 00:04:55.759 --rc genhtml_function_coverage=1 00:04:55.759 --rc genhtml_legend=1 00:04:55.759 --rc geninfo_all_blocks=1 00:04:55.759 --rc geninfo_unexecuted_blocks=1 00:04:55.759 00:04:55.759 ' 00:04:55.759 07:47:49 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.759 07:47:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.759 07:47:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.759 07:47:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.759 07:47:49 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.759 07:47:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:55.759 07:47:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.759 07:47:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:56.019 INFO: launching applications... 
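The "[: : integer expression expected" message captured above is benign: the xtrace shows nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']', i.e. a numeric test whose variable expanded to the empty string. A defensive variant of that kind of test (the variable name here is hypothetical, since the trace does not show which one was empty) avoids the noise by defaulting to 0:

# instead of:  [ "$flag" -eq 1 ]     # errors when $flag expands to the empty string
if [ "${flag:-0}" -eq 1 ]; then
  echo "flag is enabled"
fi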
00:04:56.019 07:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2260987 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.019 Waiting for target to run... 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2260987 /var/tmp/spdk_tgt.sock 00:04:56.019 07:47:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2260987 ']' 00:04:56.019 07:47:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.019 07:47:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.019 07:47:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.019 07:47:49 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:56.019 07:47:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.019 07:47:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:56.019 [2024-11-27 07:47:49.919032] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:04:56.019 [2024-11-27 07:47:49.919078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260987 ] 00:04:56.278 [2024-11-27 07:47:50.204618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.278 [2024-11-27 07:47:50.243482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.847 07:47:50 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.847 07:47:50 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:56.847 07:47:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:56.847 00:04:56.847 07:47:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:56.847 INFO: shutting down applications... 
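The extra_key run above starts spdk_tgt straight from test/json_config/extra_key.json rather than from a config captured with save_config. That file's contents never appear in this log, so the snippet below is purely illustrative of the general --json shape SPDK accepts (a "subsystems" array whose entries list RPC methods and their params); the temp path, malloc name, and sizes are made up for the example:

cat > /tmp/extra_key_example.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "MallocForTest", "num_blocks": 20480, "block_size": 512 } }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key_example.json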
00:04:56.847 07:47:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:56.847 07:47:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:56.847 07:47:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.847 07:47:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2260987 ]] 00:04:56.847 07:47:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2260987 00:04:56.847 07:47:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.847 07:47:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.847 07:47:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2260987 00:04:56.847 07:47:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.416 07:47:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.416 07:47:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.416 07:47:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2260987 00:04:57.416 07:47:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:57.416 07:47:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:57.416 07:47:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:57.416 07:47:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:57.416 SPDK target shutdown done 00:04:57.416 07:47:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:57.416 Success 00:04:57.416 00:04:57.416 real 0m1.564s 00:04:57.416 user 0m1.367s 00:04:57.416 sys 0m0.382s 00:04:57.416 07:47:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.416 07:47:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:57.416 ************************************ 00:04:57.416 END TEST json_config_extra_key 00:04:57.416 ************************************ 00:04:57.416 07:47:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.416 07:47:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.416 07:47:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.416 07:47:51 -- common/autotest_common.sh@10 -- # set +x 00:04:57.416 ************************************ 00:04:57.416 START TEST alias_rpc 00:04:57.416 ************************************ 00:04:57.416 07:47:51 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.416 * Looking for test storage... 
00:04:57.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:57.416 07:47:51 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.416 07:47:51 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.416 07:47:51 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.416 07:47:51 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.416 07:47:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:57.416 07:47:51 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.416 07:47:51 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.416 --rc genhtml_branch_coverage=1 00:04:57.416 --rc genhtml_function_coverage=1 00:04:57.416 --rc genhtml_legend=1 00:04:57.416 --rc geninfo_all_blocks=1 00:04:57.416 --rc geninfo_unexecuted_blocks=1 00:04:57.416 00:04:57.416 ' 00:04:57.416 07:47:51 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.416 --rc genhtml_branch_coverage=1 00:04:57.416 --rc genhtml_function_coverage=1 00:04:57.416 --rc genhtml_legend=1 00:04:57.416 --rc geninfo_all_blocks=1 00:04:57.416 --rc geninfo_unexecuted_blocks=1 00:04:57.416 00:04:57.416 ' 00:04:57.416 07:47:51 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.416 --rc genhtml_branch_coverage=1 00:04:57.416 --rc genhtml_function_coverage=1 00:04:57.417 --rc genhtml_legend=1 00:04:57.417 --rc geninfo_all_blocks=1 00:04:57.417 --rc geninfo_unexecuted_blocks=1 00:04:57.417 00:04:57.417 ' 00:04:57.417 07:47:51 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.417 --rc genhtml_branch_coverage=1 00:04:57.417 --rc genhtml_function_coverage=1 00:04:57.417 --rc genhtml_legend=1 00:04:57.417 --rc geninfo_all_blocks=1 00:04:57.417 --rc geninfo_unexecuted_blocks=1 00:04:57.417 00:04:57.417 ' 00:04:57.417 07:47:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.417 07:47:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2261360 00:04:57.417 07:47:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2261360 00:04:57.417 07:47:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.417 07:47:51 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2261360 ']' 00:04:57.417 07:47:51 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.417 07:47:51 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.417 07:47:51 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.417 07:47:51 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.417 07:47:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.677 [2024-11-27 07:47:51.562660] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:04:57.677 [2024-11-27 07:47:51.562713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261360 ] 00:04:57.677 [2024-11-27 07:47:51.626516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.677 [2024-11-27 07:47:51.667341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.936 07:47:51 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.936 07:47:51 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:57.936 07:47:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:58.195 07:47:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2261360 00:04:58.195 07:47:52 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2261360 ']' 00:04:58.195 07:47:52 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2261360 00:04:58.195 07:47:52 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:58.195 07:47:52 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.195 07:47:52 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261360 00:04:58.195 07:47:52 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.195 07:47:52 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.195 07:47:52 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2261360' 00:04:58.195 killing process with pid 2261360 00:04:58.195 07:47:52 alias_rpc -- common/autotest_common.sh@973 -- # kill 2261360 00:04:58.195 07:47:52 alias_rpc -- common/autotest_common.sh@978 -- # wait 2261360 00:04:58.455 00:04:58.455 real 0m1.098s 00:04:58.455 user 0m1.121s 00:04:58.455 sys 0m0.395s 00:04:58.455 07:47:52 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.455 07:47:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.455 ************************************ 00:04:58.455 END TEST alias_rpc 00:04:58.455 ************************************ 00:04:58.455 07:47:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:58.455 07:47:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:58.455 07:47:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.455 07:47:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.455 07:47:52 -- common/autotest_common.sh@10 -- # set +x 00:04:58.455 ************************************ 00:04:58.455 START TEST spdkcli_tcp 00:04:58.455 ************************************ 00:04:58.455 07:47:52 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:58.715 * Looking for test storage... 
00:04:58.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.715 07:47:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.715 --rc genhtml_branch_coverage=1 00:04:58.715 --rc genhtml_function_coverage=1 00:04:58.715 --rc genhtml_legend=1 00:04:58.715 --rc geninfo_all_blocks=1 00:04:58.715 --rc geninfo_unexecuted_blocks=1 00:04:58.715 00:04:58.715 ' 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.715 --rc genhtml_branch_coverage=1 00:04:58.715 --rc genhtml_function_coverage=1 00:04:58.715 --rc genhtml_legend=1 00:04:58.715 --rc geninfo_all_blocks=1 00:04:58.715 --rc 
geninfo_unexecuted_blocks=1 00:04:58.715 00:04:58.715 ' 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.715 --rc genhtml_branch_coverage=1 00:04:58.715 --rc genhtml_function_coverage=1 00:04:58.715 --rc genhtml_legend=1 00:04:58.715 --rc geninfo_all_blocks=1 00:04:58.715 --rc geninfo_unexecuted_blocks=1 00:04:58.715 00:04:58.715 ' 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.715 --rc genhtml_branch_coverage=1 00:04:58.715 --rc genhtml_function_coverage=1 00:04:58.715 --rc genhtml_legend=1 00:04:58.715 --rc geninfo_all_blocks=1 00:04:58.715 --rc geninfo_unexecuted_blocks=1 00:04:58.715 00:04:58.715 ' 00:04:58.715 07:47:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:58.715 07:47:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:58.715 07:47:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:58.715 07:47:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.715 07:47:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.715 07:47:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.715 07:47:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.715 07:47:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2261573 00:04:58.715 07:47:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2261573 00:04:58.715 07:47:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2261573 ']' 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.715 07:47:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.715 [2024-11-27 07:47:52.716657] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:04:58.716 [2024-11-27 07:47:52.716702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261573 ] 00:04:58.716 [2024-11-27 07:47:52.779516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.975 [2024-11-27 07:47:52.824581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.975 [2024-11-27 07:47:52.824584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.975 07:47:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.975 07:47:53 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:58.975 07:47:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2261750 00:04:58.975 07:47:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:58.975 07:47:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:59.235 [ 00:04:59.235 "bdev_malloc_delete", 00:04:59.235 "bdev_malloc_create", 00:04:59.235 "bdev_null_resize", 00:04:59.235 "bdev_null_delete", 00:04:59.235 "bdev_null_create", 00:04:59.235 "bdev_nvme_cuse_unregister", 00:04:59.235 "bdev_nvme_cuse_register", 00:04:59.235 "bdev_opal_new_user", 00:04:59.235 "bdev_opal_set_lock_state", 00:04:59.235 "bdev_opal_delete", 00:04:59.235 "bdev_opal_get_info", 00:04:59.235 "bdev_opal_create", 00:04:59.235 "bdev_nvme_opal_revert", 00:04:59.235 "bdev_nvme_opal_init", 00:04:59.235 "bdev_nvme_send_cmd", 00:04:59.235 "bdev_nvme_set_keys", 00:04:59.235 "bdev_nvme_get_path_iostat", 00:04:59.235 "bdev_nvme_get_mdns_discovery_info", 00:04:59.235 "bdev_nvme_stop_mdns_discovery", 00:04:59.235 "bdev_nvme_start_mdns_discovery", 00:04:59.235 "bdev_nvme_set_multipath_policy", 00:04:59.235 "bdev_nvme_set_preferred_path", 00:04:59.235 "bdev_nvme_get_io_paths", 00:04:59.235 "bdev_nvme_remove_error_injection", 00:04:59.235 "bdev_nvme_add_error_injection", 00:04:59.235 "bdev_nvme_get_discovery_info", 00:04:59.235 "bdev_nvme_stop_discovery", 00:04:59.235 "bdev_nvme_start_discovery", 00:04:59.235 "bdev_nvme_get_controller_health_info", 00:04:59.235 "bdev_nvme_disable_controller", 00:04:59.235 "bdev_nvme_enable_controller", 00:04:59.235 "bdev_nvme_reset_controller", 00:04:59.235 "bdev_nvme_get_transport_statistics", 00:04:59.235 "bdev_nvme_apply_firmware", 00:04:59.235 "bdev_nvme_detach_controller", 00:04:59.235 "bdev_nvme_get_controllers", 00:04:59.235 "bdev_nvme_attach_controller", 00:04:59.235 "bdev_nvme_set_hotplug", 00:04:59.235 "bdev_nvme_set_options", 00:04:59.235 "bdev_passthru_delete", 00:04:59.235 "bdev_passthru_create", 00:04:59.235 "bdev_lvol_set_parent_bdev", 00:04:59.235 "bdev_lvol_set_parent", 00:04:59.235 "bdev_lvol_check_shallow_copy", 00:04:59.235 "bdev_lvol_start_shallow_copy", 00:04:59.235 "bdev_lvol_grow_lvstore", 00:04:59.235 "bdev_lvol_get_lvols", 00:04:59.235 "bdev_lvol_get_lvstores", 00:04:59.235 "bdev_lvol_delete", 00:04:59.235 "bdev_lvol_set_read_only", 00:04:59.235 "bdev_lvol_resize", 00:04:59.235 "bdev_lvol_decouple_parent", 00:04:59.235 "bdev_lvol_inflate", 00:04:59.235 "bdev_lvol_rename", 00:04:59.235 "bdev_lvol_clone_bdev", 00:04:59.235 "bdev_lvol_clone", 00:04:59.235 "bdev_lvol_snapshot", 00:04:59.235 "bdev_lvol_create", 00:04:59.235 "bdev_lvol_delete_lvstore", 00:04:59.235 "bdev_lvol_rename_lvstore", 
00:04:59.235 "bdev_lvol_create_lvstore", 00:04:59.235 "bdev_raid_set_options", 00:04:59.235 "bdev_raid_remove_base_bdev", 00:04:59.235 "bdev_raid_add_base_bdev", 00:04:59.235 "bdev_raid_delete", 00:04:59.235 "bdev_raid_create", 00:04:59.235 "bdev_raid_get_bdevs", 00:04:59.235 "bdev_error_inject_error", 00:04:59.235 "bdev_error_delete", 00:04:59.235 "bdev_error_create", 00:04:59.235 "bdev_split_delete", 00:04:59.235 "bdev_split_create", 00:04:59.235 "bdev_delay_delete", 00:04:59.235 "bdev_delay_create", 00:04:59.235 "bdev_delay_update_latency", 00:04:59.235 "bdev_zone_block_delete", 00:04:59.235 "bdev_zone_block_create", 00:04:59.235 "blobfs_create", 00:04:59.235 "blobfs_detect", 00:04:59.235 "blobfs_set_cache_size", 00:04:59.235 "bdev_aio_delete", 00:04:59.235 "bdev_aio_rescan", 00:04:59.235 "bdev_aio_create", 00:04:59.235 "bdev_ftl_set_property", 00:04:59.235 "bdev_ftl_get_properties", 00:04:59.235 "bdev_ftl_get_stats", 00:04:59.235 "bdev_ftl_unmap", 00:04:59.235 "bdev_ftl_unload", 00:04:59.235 "bdev_ftl_delete", 00:04:59.235 "bdev_ftl_load", 00:04:59.235 "bdev_ftl_create", 00:04:59.235 "bdev_virtio_attach_controller", 00:04:59.235 "bdev_virtio_scsi_get_devices", 00:04:59.235 "bdev_virtio_detach_controller", 00:04:59.235 "bdev_virtio_blk_set_hotplug", 00:04:59.235 "bdev_iscsi_delete", 00:04:59.235 "bdev_iscsi_create", 00:04:59.235 "bdev_iscsi_set_options", 00:04:59.235 "accel_error_inject_error", 00:04:59.235 "ioat_scan_accel_module", 00:04:59.235 "dsa_scan_accel_module", 00:04:59.235 "iaa_scan_accel_module", 00:04:59.235 "vfu_virtio_create_fs_endpoint", 00:04:59.235 "vfu_virtio_create_scsi_endpoint", 00:04:59.235 "vfu_virtio_scsi_remove_target", 00:04:59.235 "vfu_virtio_scsi_add_target", 00:04:59.235 "vfu_virtio_create_blk_endpoint", 00:04:59.235 "vfu_virtio_delete_endpoint", 00:04:59.235 "keyring_file_remove_key", 00:04:59.235 "keyring_file_add_key", 00:04:59.235 "keyring_linux_set_options", 00:04:59.235 "fsdev_aio_delete", 00:04:59.235 "fsdev_aio_create", 00:04:59.235 "iscsi_get_histogram", 00:04:59.235 "iscsi_enable_histogram", 00:04:59.235 "iscsi_set_options", 00:04:59.235 "iscsi_get_auth_groups", 00:04:59.235 "iscsi_auth_group_remove_secret", 00:04:59.235 "iscsi_auth_group_add_secret", 00:04:59.235 "iscsi_delete_auth_group", 00:04:59.235 "iscsi_create_auth_group", 00:04:59.235 "iscsi_set_discovery_auth", 00:04:59.235 "iscsi_get_options", 00:04:59.235 "iscsi_target_node_request_logout", 00:04:59.235 "iscsi_target_node_set_redirect", 00:04:59.235 "iscsi_target_node_set_auth", 00:04:59.235 "iscsi_target_node_add_lun", 00:04:59.235 "iscsi_get_stats", 00:04:59.235 "iscsi_get_connections", 00:04:59.235 "iscsi_portal_group_set_auth", 00:04:59.235 "iscsi_start_portal_group", 00:04:59.235 "iscsi_delete_portal_group", 00:04:59.235 "iscsi_create_portal_group", 00:04:59.235 "iscsi_get_portal_groups", 00:04:59.235 "iscsi_delete_target_node", 00:04:59.235 "iscsi_target_node_remove_pg_ig_maps", 00:04:59.235 "iscsi_target_node_add_pg_ig_maps", 00:04:59.235 "iscsi_create_target_node", 00:04:59.235 "iscsi_get_target_nodes", 00:04:59.235 "iscsi_delete_initiator_group", 00:04:59.235 "iscsi_initiator_group_remove_initiators", 00:04:59.235 "iscsi_initiator_group_add_initiators", 00:04:59.235 "iscsi_create_initiator_group", 00:04:59.235 "iscsi_get_initiator_groups", 00:04:59.235 "nvmf_set_crdt", 00:04:59.235 "nvmf_set_config", 00:04:59.235 "nvmf_set_max_subsystems", 00:04:59.235 "nvmf_stop_mdns_prr", 00:04:59.235 "nvmf_publish_mdns_prr", 00:04:59.235 "nvmf_subsystem_get_listeners", 00:04:59.235 
"nvmf_subsystem_get_qpairs", 00:04:59.235 "nvmf_subsystem_get_controllers", 00:04:59.235 "nvmf_get_stats", 00:04:59.235 "nvmf_get_transports", 00:04:59.235 "nvmf_create_transport", 00:04:59.235 "nvmf_get_targets", 00:04:59.235 "nvmf_delete_target", 00:04:59.235 "nvmf_create_target", 00:04:59.235 "nvmf_subsystem_allow_any_host", 00:04:59.235 "nvmf_subsystem_set_keys", 00:04:59.235 "nvmf_subsystem_remove_host", 00:04:59.235 "nvmf_subsystem_add_host", 00:04:59.235 "nvmf_ns_remove_host", 00:04:59.235 "nvmf_ns_add_host", 00:04:59.235 "nvmf_subsystem_remove_ns", 00:04:59.235 "nvmf_subsystem_set_ns_ana_group", 00:04:59.235 "nvmf_subsystem_add_ns", 00:04:59.235 "nvmf_subsystem_listener_set_ana_state", 00:04:59.235 "nvmf_discovery_get_referrals", 00:04:59.235 "nvmf_discovery_remove_referral", 00:04:59.235 "nvmf_discovery_add_referral", 00:04:59.235 "nvmf_subsystem_remove_listener", 00:04:59.235 "nvmf_subsystem_add_listener", 00:04:59.235 "nvmf_delete_subsystem", 00:04:59.235 "nvmf_create_subsystem", 00:04:59.235 "nvmf_get_subsystems", 00:04:59.235 "env_dpdk_get_mem_stats", 00:04:59.235 "nbd_get_disks", 00:04:59.235 "nbd_stop_disk", 00:04:59.235 "nbd_start_disk", 00:04:59.235 "ublk_recover_disk", 00:04:59.235 "ublk_get_disks", 00:04:59.235 "ublk_stop_disk", 00:04:59.235 "ublk_start_disk", 00:04:59.235 "ublk_destroy_target", 00:04:59.235 "ublk_create_target", 00:04:59.235 "virtio_blk_create_transport", 00:04:59.235 "virtio_blk_get_transports", 00:04:59.235 "vhost_controller_set_coalescing", 00:04:59.235 "vhost_get_controllers", 00:04:59.235 "vhost_delete_controller", 00:04:59.235 "vhost_create_blk_controller", 00:04:59.235 "vhost_scsi_controller_remove_target", 00:04:59.235 "vhost_scsi_controller_add_target", 00:04:59.235 "vhost_start_scsi_controller", 00:04:59.235 "vhost_create_scsi_controller", 00:04:59.235 "thread_set_cpumask", 00:04:59.235 "scheduler_set_options", 00:04:59.235 "framework_get_governor", 00:04:59.235 "framework_get_scheduler", 00:04:59.235 "framework_set_scheduler", 00:04:59.235 "framework_get_reactors", 00:04:59.235 "thread_get_io_channels", 00:04:59.235 "thread_get_pollers", 00:04:59.235 "thread_get_stats", 00:04:59.235 "framework_monitor_context_switch", 00:04:59.235 "spdk_kill_instance", 00:04:59.235 "log_enable_timestamps", 00:04:59.235 "log_get_flags", 00:04:59.235 "log_clear_flag", 00:04:59.235 "log_set_flag", 00:04:59.235 "log_get_level", 00:04:59.235 "log_set_level", 00:04:59.235 "log_get_print_level", 00:04:59.235 "log_set_print_level", 00:04:59.235 "framework_enable_cpumask_locks", 00:04:59.235 "framework_disable_cpumask_locks", 00:04:59.235 "framework_wait_init", 00:04:59.235 "framework_start_init", 00:04:59.235 "scsi_get_devices", 00:04:59.235 "bdev_get_histogram", 00:04:59.235 "bdev_enable_histogram", 00:04:59.235 "bdev_set_qos_limit", 00:04:59.235 "bdev_set_qd_sampling_period", 00:04:59.236 "bdev_get_bdevs", 00:04:59.236 "bdev_reset_iostat", 00:04:59.236 "bdev_get_iostat", 00:04:59.236 "bdev_examine", 00:04:59.236 "bdev_wait_for_examine", 00:04:59.236 "bdev_set_options", 00:04:59.236 "accel_get_stats", 00:04:59.236 "accel_set_options", 00:04:59.236 "accel_set_driver", 00:04:59.236 "accel_crypto_key_destroy", 00:04:59.236 "accel_crypto_keys_get", 00:04:59.236 "accel_crypto_key_create", 00:04:59.236 "accel_assign_opc", 00:04:59.236 "accel_get_module_info", 00:04:59.236 "accel_get_opc_assignments", 00:04:59.236 "vmd_rescan", 00:04:59.236 "vmd_remove_device", 00:04:59.236 "vmd_enable", 00:04:59.236 "sock_get_default_impl", 00:04:59.236 "sock_set_default_impl", 
00:04:59.236 "sock_impl_set_options", 00:04:59.236 "sock_impl_get_options", 00:04:59.236 "iobuf_get_stats", 00:04:59.236 "iobuf_set_options", 00:04:59.236 "keyring_get_keys", 00:04:59.236 "vfu_tgt_set_base_path", 00:04:59.236 "framework_get_pci_devices", 00:04:59.236 "framework_get_config", 00:04:59.236 "framework_get_subsystems", 00:04:59.236 "fsdev_set_opts", 00:04:59.236 "fsdev_get_opts", 00:04:59.236 "trace_get_info", 00:04:59.236 "trace_get_tpoint_group_mask", 00:04:59.236 "trace_disable_tpoint_group", 00:04:59.236 "trace_enable_tpoint_group", 00:04:59.236 "trace_clear_tpoint_mask", 00:04:59.236 "trace_set_tpoint_mask", 00:04:59.236 "notify_get_notifications", 00:04:59.236 "notify_get_types", 00:04:59.236 "spdk_get_version", 00:04:59.236 "rpc_get_methods" 00:04:59.236 ] 00:04:59.236 07:47:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.236 07:47:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:59.236 07:47:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2261573 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2261573 ']' 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2261573 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261573 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2261573' 00:04:59.236 killing process with pid 2261573 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2261573 00:04:59.236 07:47:53 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2261573 00:04:59.805 00:04:59.805 real 0m1.117s 00:04:59.805 user 0m1.894s 00:04:59.805 sys 0m0.425s 00:04:59.805 07:47:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.805 07:47:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.805 ************************************ 00:04:59.805 END TEST spdkcli_tcp 00:04:59.805 ************************************ 00:04:59.805 07:47:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.805 07:47:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.805 07:47:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.805 07:47:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.805 ************************************ 00:04:59.805 START TEST dpdk_mem_utility 00:04:59.805 ************************************ 00:04:59.805 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.805 * Looking for test storage... 
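The spdkcli_tcp run that just finished above checks that the JSON-RPC service is reachable over TCP, not only over the default UNIX socket: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is then pointed at that TCP endpoint with a retry count and timeout before asking for rpc_get_methods. A minimal hand-run sketch of the same bridge, using the port, socket path and flag values from this run (paths are relative to an SPDK checkout; the background processes still need to be cleaned up afterwards):

  # start the target, wait for /var/tmp/spdk.sock, then expose it on a TCP port
  ./build/bin/spdk_tgt -m 0x3 &
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # query the method list through the bridge; -r and -t are the retry/timeout flags used above
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods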
00:04:59.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:59.805 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.805 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.805 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.805 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.805 07:47:53 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.805 07:47:53 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.805 07:47:53 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.805 07:47:53 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.805 07:47:53 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.805 07:47:53 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.805 07:47:53 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.806 07:47:53 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.806 --rc genhtml_branch_coverage=1 00:04:59.806 --rc genhtml_function_coverage=1 00:04:59.806 --rc genhtml_legend=1 00:04:59.806 --rc geninfo_all_blocks=1 00:04:59.806 --rc geninfo_unexecuted_blocks=1 00:04:59.806 00:04:59.806 ' 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.806 --rc 
genhtml_branch_coverage=1 00:04:59.806 --rc genhtml_function_coverage=1 00:04:59.806 --rc genhtml_legend=1 00:04:59.806 --rc geninfo_all_blocks=1 00:04:59.806 --rc geninfo_unexecuted_blocks=1 00:04:59.806 00:04:59.806 ' 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.806 --rc genhtml_branch_coverage=1 00:04:59.806 --rc genhtml_function_coverage=1 00:04:59.806 --rc genhtml_legend=1 00:04:59.806 --rc geninfo_all_blocks=1 00:04:59.806 --rc geninfo_unexecuted_blocks=1 00:04:59.806 00:04:59.806 ' 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.806 --rc genhtml_branch_coverage=1 00:04:59.806 --rc genhtml_function_coverage=1 00:04:59.806 --rc genhtml_legend=1 00:04:59.806 --rc geninfo_all_blocks=1 00:04:59.806 --rc geninfo_unexecuted_blocks=1 00:04:59.806 00:04:59.806 ' 00:04:59.806 07:47:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:59.806 07:47:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2261877 00:04:59.806 07:47:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2261877 00:04:59.806 07:47:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2261877 ']' 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.806 07:47:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.064 [2024-11-27 07:47:53.918264] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
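The dpdk_mem_utility test being set up here (the spdk_tgt instance announced just above; its EAL parameters follow) boils down to three steps: start a target, have it dump DPDK memory statistics via the env_dpdk_get_mem_stats RPC, and post-process the dump with scripts/dpdk_mem_info.py. A rough equivalent by hand, assuming a built SPDK checkout (the dump path comes from the RPC reply shown in the trace below):

  ./build/bin/spdk_tgt &
  # ask the running target to write its DPDK memory dump
  ./scripts/rpc.py env_dpdk_get_mem_stats      # replies {"filename": "/tmp/spdk_mem_dump.txt"}
  # summarize heaps, mempools and memzones from that dump
  ./scripts/dpdk_mem_info.py
  # print the element layout of a single heap (heap id 0), as the test does with -m 0
  ./scripts/dpdk_mem_info.py -m 0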
00:05:00.064 [2024-11-27 07:47:53.918314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261877 ] 00:05:00.064 [2024-11-27 07:47:53.979848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.064 [2024-11-27 07:47:54.019540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.324 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.324 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:00.324 07:47:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:00.324 07:47:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:00.324 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.324 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.324 { 00:05:00.324 "filename": "/tmp/spdk_mem_dump.txt" 00:05:00.324 } 00:05:00.324 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.324 07:47:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:00.324 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:00.324 1 heaps totaling size 818.000000 MiB 00:05:00.324 size: 818.000000 MiB heap id: 0 00:05:00.324 end heaps---------- 00:05:00.324 9 mempools totaling size 603.782043 MiB 00:05:00.324 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:00.324 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:00.324 size: 100.555481 MiB name: bdev_io_2261877 00:05:00.324 size: 50.003479 MiB name: msgpool_2261877 00:05:00.324 size: 36.509338 MiB name: fsdev_io_2261877 00:05:00.324 size: 21.763794 MiB name: PDU_Pool 00:05:00.324 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:00.324 size: 4.133484 MiB name: evtpool_2261877 00:05:00.324 size: 0.026123 MiB name: Session_Pool 00:05:00.324 end mempools------- 00:05:00.324 6 memzones totaling size 4.142822 MiB 00:05:00.324 size: 1.000366 MiB name: RG_ring_0_2261877 00:05:00.324 size: 1.000366 MiB name: RG_ring_1_2261877 00:05:00.324 size: 1.000366 MiB name: RG_ring_4_2261877 00:05:00.325 size: 1.000366 MiB name: RG_ring_5_2261877 00:05:00.325 size: 0.125366 MiB name: RG_ring_2_2261877 00:05:00.325 size: 0.015991 MiB name: RG_ring_3_2261877 00:05:00.325 end memzones------- 00:05:00.325 07:47:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:00.325 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:00.325 list of free elements. 
size: 10.852478 MiB 00:05:00.325 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:00.325 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:00.325 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:00.325 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:00.325 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:00.325 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:00.325 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:00.325 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:00.325 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:00.325 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:00.325 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:00.325 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:00.325 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:00.325 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:00.325 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:00.325 list of standard malloc elements. size: 199.218628 MiB 00:05:00.325 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:00.325 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:00.325 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:00.325 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:00.325 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:00.325 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:00.325 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:00.325 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:00.325 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:00.325 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:00.325 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:00.325 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:00.325 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:00.325 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:00.325 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:00.325 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:00.325 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:00.325 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:00.325 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:00.325 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:00.325 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:00.325 list of memzone associated elements. size: 607.928894 MiB 00:05:00.325 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:00.325 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:00.325 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:00.325 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:00.325 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:00.325 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2261877_0 00:05:00.325 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:00.325 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2261877_0 00:05:00.325 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:00.325 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2261877_0 00:05:00.325 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:00.325 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:00.325 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:00.325 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:00.325 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:00.325 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2261877_0 00:05:00.325 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:00.325 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2261877 00:05:00.325 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:00.325 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2261877 00:05:00.325 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:00.325 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:00.325 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:00.325 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:00.325 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:00.325 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:00.325 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:00.325 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:00.325 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:00.325 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2261877 00:05:00.325 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:00.325 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2261877 00:05:00.325 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:00.325 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2261877 00:05:00.325 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:00.325 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2261877 00:05:00.325 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:00.325 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2261877 00:05:00.325 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:00.325 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2261877 00:05:00.325 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:00.326 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:00.326 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:00.326 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:00.326 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:00.326 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:00.326 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:00.326 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2261877 00:05:00.326 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:00.326 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2261877 00:05:00.326 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:00.326 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:00.326 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:00.326 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:00.326 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:00.326 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2261877 00:05:00.326 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:00.326 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:00.326 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:00.326 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2261877 00:05:00.326 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:00.326 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2261877 00:05:00.326 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:00.326 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2261877 00:05:00.326 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:00.326 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:00.326 07:47:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:00.326 07:47:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2261877 00:05:00.326 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2261877 ']' 00:05:00.326 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2261877 00:05:00.326 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:00.326 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.326 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261877 00:05:00.326 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.326 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.326 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2261877' 00:05:00.326 killing process with pid 2261877 00:05:00.326 07:47:54 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2261877 00:05:00.326 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2261877 00:05:00.585 00:05:00.585 real 0m0.988s 00:05:00.585 user 0m0.919s 00:05:00.585 sys 0m0.390s 00:05:00.585 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.585 07:47:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.585 ************************************ 00:05:00.585 END TEST dpdk_mem_utility 00:05:00.585 ************************************ 00:05:00.844 07:47:54 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:00.844 07:47:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.844 07:47:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.844 07:47:54 -- common/autotest_common.sh@10 -- # set +x 00:05:00.844 ************************************ 00:05:00.844 START TEST event 00:05:00.844 ************************************ 00:05:00.844 07:47:54 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:00.844 * Looking for test storage... 00:05:00.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:00.844 07:47:54 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.844 07:47:54 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.844 07:47:54 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.845 07:47:54 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.845 07:47:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.845 07:47:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.845 07:47:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.845 07:47:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.845 07:47:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.845 07:47:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.845 07:47:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.845 07:47:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.845 07:47:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.845 07:47:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.845 07:47:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.845 07:47:54 event -- scripts/common.sh@344 -- # case "$op" in 00:05:00.845 07:47:54 event -- scripts/common.sh@345 -- # : 1 00:05:00.845 07:47:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.845 07:47:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.845 07:47:54 event -- scripts/common.sh@365 -- # decimal 1 00:05:00.845 07:47:54 event -- scripts/common.sh@353 -- # local d=1 00:05:00.845 07:47:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.845 07:47:54 event -- scripts/common.sh@355 -- # echo 1 00:05:00.845 07:47:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.845 07:47:54 event -- scripts/common.sh@366 -- # decimal 2 00:05:00.845 07:47:54 event -- scripts/common.sh@353 -- # local d=2 00:05:00.845 07:47:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.845 07:47:54 event -- scripts/common.sh@355 -- # echo 2 00:05:00.845 07:47:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.845 07:47:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.845 07:47:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.845 07:47:54 event -- scripts/common.sh@368 -- # return 0 00:05:00.845 07:47:54 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.845 07:47:54 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.845 --rc genhtml_branch_coverage=1 00:05:00.845 --rc genhtml_function_coverage=1 00:05:00.845 --rc genhtml_legend=1 00:05:00.845 --rc geninfo_all_blocks=1 00:05:00.845 --rc geninfo_unexecuted_blocks=1 00:05:00.845 00:05:00.845 ' 00:05:00.845 07:47:54 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.845 --rc genhtml_branch_coverage=1 00:05:00.845 --rc genhtml_function_coverage=1 00:05:00.845 --rc genhtml_legend=1 00:05:00.845 --rc geninfo_all_blocks=1 00:05:00.845 --rc geninfo_unexecuted_blocks=1 00:05:00.845 00:05:00.845 ' 00:05:00.845 07:47:54 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.845 --rc genhtml_branch_coverage=1 00:05:00.845 --rc genhtml_function_coverage=1 00:05:00.845 --rc genhtml_legend=1 00:05:00.845 --rc geninfo_all_blocks=1 00:05:00.845 --rc geninfo_unexecuted_blocks=1 00:05:00.845 00:05:00.845 ' 00:05:00.845 07:47:54 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.845 --rc genhtml_branch_coverage=1 00:05:00.845 --rc genhtml_function_coverage=1 00:05:00.845 --rc genhtml_legend=1 00:05:00.845 --rc geninfo_all_blocks=1 00:05:00.845 --rc geninfo_unexecuted_blocks=1 00:05:00.845 00:05:00.845 ' 00:05:00.845 07:47:54 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:00.845 07:47:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:00.845 07:47:54 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.845 07:47:54 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:00.845 07:47:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.845 07:47:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.845 ************************************ 00:05:00.845 START TEST event_perf 00:05:00.845 ************************************ 00:05:00.845 07:47:54 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:01.104 Running I/O for 1 seconds...[2024-11-27 07:47:54.962557] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:05:01.104 [2024-11-27 07:47:54.962614] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262167 ] 00:05:01.104 [2024-11-27 07:47:55.026174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.104 [2024-11-27 07:47:55.070476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.104 [2024-11-27 07:47:55.070573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.104 [2024-11-27 07:47:55.070675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.104 [2024-11-27 07:47:55.070676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.038 Running I/O for 1 seconds... 00:05:02.038 lcore 0: 205683 00:05:02.038 lcore 1: 205682 00:05:02.038 lcore 2: 205683 00:05:02.038 lcore 3: 205684 00:05:02.038 done. 00:05:02.038 00:05:02.038 real 0m1.162s 00:05:02.038 user 0m4.092s 00:05:02.038 sys 0m0.065s 00:05:02.038 07:47:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.038 07:47:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.038 ************************************ 00:05:02.038 END TEST event_perf 00:05:02.038 ************************************ 00:05:02.038 07:47:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:02.038 07:47:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:02.038 07:47:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.038 07:47:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.297 ************************************ 00:05:02.297 START TEST event_reactor 00:05:02.297 ************************************ 00:05:02.297 07:47:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:02.297 [2024-11-27 07:47:56.195350] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:05:02.297 [2024-11-27 07:47:56.195416] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262420 ] 00:05:02.297 [2024-11-27 07:47:56.260091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.297 [2024-11-27 07:47:56.300372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.233 test_start 00:05:03.233 oneshot 00:05:03.233 tick 100 00:05:03.233 tick 100 00:05:03.233 tick 250 00:05:03.233 tick 100 00:05:03.233 tick 100 00:05:03.233 tick 100 00:05:03.233 tick 250 00:05:03.233 tick 500 00:05:03.233 tick 100 00:05:03.233 tick 100 00:05:03.233 tick 250 00:05:03.233 tick 100 00:05:03.233 tick 100 00:05:03.233 test_end 00:05:03.233 00:05:03.233 real 0m1.164s 00:05:03.233 user 0m1.098s 00:05:03.233 sys 0m0.062s 00:05:03.233 07:47:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.492 07:47:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:03.492 ************************************ 00:05:03.492 END TEST event_reactor 00:05:03.492 ************************************ 00:05:03.492 07:47:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.492 07:47:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:03.492 07:47:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.492 07:47:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.492 ************************************ 00:05:03.492 START TEST event_reactor_perf 00:05:03.492 ************************************ 00:05:03.492 07:47:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.492 [2024-11-27 07:47:57.419226] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:05:03.492 [2024-11-27 07:47:57.419294] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262671 ] 00:05:03.492 [2024-11-27 07:47:57.482757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.492 [2024-11-27 07:47:57.522744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.870 test_start 00:05:04.870 test_end 00:05:04.870 Performance: 498142 events per second 00:05:04.870 00:05:04.870 real 0m1.164s 00:05:04.870 user 0m1.098s 00:05:04.870 sys 0m0.062s 00:05:04.870 07:47:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.870 07:47:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.870 ************************************ 00:05:04.870 END TEST event_reactor_perf 00:05:04.870 ************************************ 00:05:04.870 07:47:58 event -- event/event.sh@49 -- # uname -s 00:05:04.870 07:47:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:04.870 07:47:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:04.870 07:47:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.870 07:47:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.870 07:47:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.870 ************************************ 00:05:04.870 START TEST event_scheduler 00:05:04.870 ************************************ 00:05:04.870 07:47:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:04.870 * Looking for test storage... 
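The three event micro-benchmarks above are all driven the same way: a standalone test binary, a reactor core mask (-m) and a run time in seconds (-t). event_perf with -m 0xF counted roughly 205k events on each of four lcores in one second (about 820k events/s total), the reactor test prints the one-shot and periodic-tick schedule it executed, and reactor_perf reports about 498k events per second on a single core. They can be rerun directly from a built SPDK tree, e.g.:

  # four reactors for one second; per-lcore event counts are printed at the end
  ./test/event/event_perf/event_perf -m 0xF -t 1
  # single reactor; exercises one-shot events and periodic ticks (see the test_start/tick output above)
  ./test/event/reactor/reactor -t 1
  # single reactor; prints aggregate events per second
  ./test/event/reactor_perf/reactor_perf -t 1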
00:05:04.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:04.870 07:47:58 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.870 07:47:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.870 07:47:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.870 07:47:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:04.870 07:47:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.871 07:47:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:04.871 07:47:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:04.871 07:47:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.871 07:47:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:04.871 07:47:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.871 07:47:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.871 07:47:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.871 07:47:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.871 --rc genhtml_branch_coverage=1 00:05:04.871 --rc genhtml_function_coverage=1 00:05:04.871 --rc genhtml_legend=1 00:05:04.871 --rc geninfo_all_blocks=1 00:05:04.871 --rc geninfo_unexecuted_blocks=1 00:05:04.871 00:05:04.871 ' 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.871 --rc genhtml_branch_coverage=1 00:05:04.871 --rc genhtml_function_coverage=1 00:05:04.871 --rc genhtml_legend=1 00:05:04.871 --rc geninfo_all_blocks=1 00:05:04.871 --rc geninfo_unexecuted_blocks=1 00:05:04.871 00:05:04.871 ' 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.871 --rc genhtml_branch_coverage=1 00:05:04.871 --rc genhtml_function_coverage=1 00:05:04.871 --rc genhtml_legend=1 00:05:04.871 --rc geninfo_all_blocks=1 00:05:04.871 --rc geninfo_unexecuted_blocks=1 00:05:04.871 00:05:04.871 ' 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.871 --rc genhtml_branch_coverage=1 00:05:04.871 --rc genhtml_function_coverage=1 00:05:04.871 --rc genhtml_legend=1 00:05:04.871 --rc geninfo_all_blocks=1 00:05:04.871 --rc geninfo_unexecuted_blocks=1 00:05:04.871 00:05:04.871 ' 00:05:04.871 07:47:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:04.871 07:47:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2262952 00:05:04.871 07:47:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:04.871 07:47:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.871 07:47:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2262952 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2262952 ']' 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.871 07:47:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.871 [2024-11-27 07:47:58.842869] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:05:04.871 [2024-11-27 07:47:58.842921] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262952 ] 00:05:04.871 [2024-11-27 07:47:58.902338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.871 [2024-11-27 07:47:58.945694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.871 [2024-11-27 07:47:58.945776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.871 [2024-11-27 07:47:58.945863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.871 [2024-11-27 07:47:58.945865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.131 07:47:58 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.131 07:47:58 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:05.131 07:47:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:05.131 07:47:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 [2024-11-27 07:47:59.006435] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:05.131 [2024-11-27 07:47:59.006453] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:05.131 [2024-11-27 07:47:59.006462] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:05.131 [2024-11-27 07:47:59.006468] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:05.131 [2024-11-27 07:47:59.006473] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:05.131 07:47:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:05.131 07:47:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 [2024-11-27 07:47:59.085746] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
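The scheduler test application is started paused (--wait-for-rpc) on four cores, with core 2 as the main core (-p 0x2, visible as --main-lcore=2 in the EAL parameters above), and the script then drives it over RPC: it selects the dynamic scheduler first and only then lets framework initialization finish. The ERROR about SMT siblings in the trace that follows means the DPDK governor could not be set up for this core mask; the dynamic scheduler still comes up and sets its load/core-limit/busy options (20/80/95). The RPC sequence, reduced to a sketch against the default RPC socket:

  # start the test app paused so the scheduler can be chosen before init
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # pick the dynamic scheduler, then resume framework initialization
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init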
00:05:05.131 07:47:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:05.131 07:47:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.131 07:47:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 ************************************ 00:05:05.131 START TEST scheduler_create_thread 00:05:05.131 ************************************ 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 2 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 3 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 4 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 5 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 6 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 7 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 8 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 9 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 10 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.131 07:47:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.067 07:48:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.067 07:48:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:06.067 07:48:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.067 07:48:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.442 07:48:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.442 07:48:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:07.442 07:48:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:07.442 07:48:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.442 07:48:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.817 07:48:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.817 00:05:08.817 real 0m3.383s 00:05:08.817 user 0m0.023s 00:05:08.817 sys 0m0.005s 00:05:08.817 07:48:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.817 07:48:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.817 ************************************ 00:05:08.817 END TEST scheduler_create_thread 00:05:08.817 ************************************ 00:05:08.817 07:48:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:08.817 07:48:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2262952 00:05:08.817 07:48:02 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2262952 ']' 00:05:08.817 07:48:02 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2262952 00:05:08.817 07:48:02 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:08.817 07:48:02 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.817 07:48:02 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2262952 00:05:08.817 07:48:02 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:08.817 07:48:02 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:08.817 07:48:02 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2262952' 00:05:08.817 killing process with pid 2262952 00:05:08.817 07:48:02 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2262952 00:05:08.817 07:48:02 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2262952 00:05:08.817 [2024-11-27 07:48:02.886065] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
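The scheduler_create_thread subtest above uses a test-only RPC plugin: rpc.py is invoked with --plugin scheduler_plugin to create named SPDK threads with a cpumask (-m) and a target activity value (-a, as the thread names active_pinned/idle_pinned/one_third_active suggest), then to mark one thread 50% active and delete another. Thread ids 11 and 12 are simply the ids this particular run got back. A condensed sketch, assuming rpc.py can import the scheduler_plugin module shipped with the test (the test script arranges that):

  # a thread pinned to core 0 that tries to stay ~100% busy
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # an unpinned thread that is ~30% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  # change an existing thread's activity, then remove a thread by id
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12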
00:05:09.076 00:05:09.076 real 0m4.462s 00:05:09.076 user 0m7.853s 00:05:09.076 sys 0m0.362s 00:05:09.076 07:48:03 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.076 07:48:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.076 ************************************ 00:05:09.076 END TEST event_scheduler 00:05:09.076 ************************************ 00:05:09.076 07:48:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:09.076 07:48:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:09.076 07:48:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.076 07:48:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.076 07:48:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.076 ************************************ 00:05:09.076 START TEST app_repeat 00:05:09.076 ************************************ 00:05:09.076 07:48:03 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2263822 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2263822' 00:05:09.076 Process app_repeat pid: 2263822 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:09.076 spdk_app_start Round 0 00:05:09.076 07:48:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2263822 /var/tmp/spdk-nbd.sock 00:05:09.076 07:48:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2263822 ']' 00:05:09.076 07:48:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.076 07:48:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.076 07:48:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.076 07:48:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.076 07:48:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.334 [2024-11-27 07:48:03.203081] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:05:09.334 [2024-11-27 07:48:03.203140] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263822 ] 00:05:09.334 [2024-11-27 07:48:03.267664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.335 [2024-11-27 07:48:03.312101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.335 [2024-11-27 07:48:03.312105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.335 07:48:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.335 07:48:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.335 07:48:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.592 Malloc0 00:05:09.592 07:48:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.850 Malloc1 00:05:09.850 07:48:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.850 07:48:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.108 /dev/nbd0 00:05:10.108 07:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.108 07:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.108 1+0 records in 00:05:10.108 1+0 records out 00:05:10.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001945 s, 21.1 MB/s 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.108 07:48:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.108 07:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.108 07:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.108 07:48:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.366 /dev/nbd1 00:05:10.366 07:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.366 07:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.366 1+0 records in 00:05:10.366 1+0 records out 00:05:10.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209282 s, 19.6 MB/s 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.366 07:48:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.366 07:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.366 07:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.366 07:48:04 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.366 07:48:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.366 07:48:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.624 07:48:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.624 { 00:05:10.624 "nbd_device": "/dev/nbd0", 00:05:10.624 "bdev_name": "Malloc0" 00:05:10.624 }, 00:05:10.624 { 00:05:10.624 "nbd_device": "/dev/nbd1", 00:05:10.624 "bdev_name": "Malloc1" 00:05:10.624 } 00:05:10.624 ]' 00:05:10.624 07:48:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.624 { 00:05:10.624 "nbd_device": "/dev/nbd0", 00:05:10.624 "bdev_name": "Malloc0" 00:05:10.624 }, 00:05:10.624 { 00:05:10.624 "nbd_device": "/dev/nbd1", 00:05:10.624 "bdev_name": "Malloc1" 00:05:10.624 } 00:05:10.624 ]' 00:05:10.624 07:48:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.624 07:48:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.624 /dev/nbd1' 00:05:10.624 07:48:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.624 /dev/nbd1' 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.625 256+0 records in 00:05:10.625 256+0 records out 00:05:10.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101041 s, 104 MB/s 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.625 256+0 records in 00:05:10.625 256+0 records out 00:05:10.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134296 s, 78.1 MB/s 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.625 256+0 records in 00:05:10.625 256+0 records out 00:05:10.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148795 s, 70.5 MB/s 00:05:10.625 07:48:04 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.625 07:48:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.883 07:48:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.883 07:48:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.883 07:48:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.883 07:48:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.883 07:48:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.883 07:48:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.883 07:48:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.883 07:48:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.883 07:48:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.883 07:48:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.141 07:48:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.399 07:48:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.399 07:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.399 07:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.399 07:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.399 07:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.399 07:48:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.399 07:48:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.399 07:48:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.399 07:48:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.399 07:48:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.399 07:48:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.658 [2024-11-27 07:48:05.648361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.658 [2024-11-27 07:48:05.685638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.658 [2024-11-27 07:48:05.685641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.658 [2024-11-27 07:48:05.726564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.658 [2024-11-27 07:48:05.726611] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.956 07:48:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.956 07:48:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:14.956 spdk_app_start Round 1 00:05:14.956 07:48:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2263822 /var/tmp/spdk-nbd.sock 00:05:14.956 07:48:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2263822 ']' 00:05:14.956 07:48:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.956 07:48:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.956 07:48:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
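Round 0 above has just completed the full malloc/nbd write-and-verify cycle that every spdk_app_start round repeats. A condensed sketch of one such cycle follows, keeping the RPC socket and the dd/cmp parameters as they appear in the trace; the rpc variable, the relative scripts/rpc.py path and the /tmp scratch-file location are simplifications for illustration.

    # One app_repeat round, condensed from the trace above.
    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    rand=/tmp/nbdrandtest            # the trace keeps this under spdk/test/event/

    modprobe nbd                     # the test loads the nbd module up front

    # Two 64 MB malloc bdevs with a 4096-byte block size, exported over nbd.
    $rpc bdev_malloc_create 64 4096          # -> Malloc0
    $rpc bdev_malloc_create 64 4096          # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data to each device, then compare it back.
    dd if=/dev/urandom of="$rand" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$rand" of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$rand" "$nbd"            # any mismatch fails the round
    done
    rm "$rand"

    # Detach both devices and confirm nothing is left registered.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    [[ $($rpc nbd_get_disks | grep -c /dev/nbd) -eq 0 ]]

The app is then told to exit with spdk_kill_instance SIGTERM, which is what ends each round before the next 'spdk_app_start Round N' banner appears.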
00:05:14.956 07:48:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.956 07:48:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.956 07:48:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.956 07:48:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.956 07:48:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.956 Malloc0 00:05:14.956 07:48:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.214 Malloc1 00:05:15.215 07:48:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.215 /dev/nbd0 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.215 07:48:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.215 07:48:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.215 07:48:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.215 07:48:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.215 07:48:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.215 07:48:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:15.474 1+0 records in 00:05:15.474 1+0 records out 00:05:15.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000132187 s, 31.0 MB/s 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.474 07:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.474 07:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.474 07:48:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.474 /dev/nbd1 00:05:15.474 07:48:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.474 07:48:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.474 1+0 records in 00:05:15.474 1+0 records out 00:05:15.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218306 s, 18.8 MB/s 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.474 07:48:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.732 07:48:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.732 07:48:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:15.732 { 00:05:15.732 "nbd_device": "/dev/nbd0", 00:05:15.732 "bdev_name": "Malloc0" 00:05:15.732 }, 00:05:15.732 { 00:05:15.732 "nbd_device": "/dev/nbd1", 00:05:15.732 "bdev_name": "Malloc1" 00:05:15.732 } 00:05:15.732 ]' 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.732 { 00:05:15.732 "nbd_device": "/dev/nbd0", 00:05:15.732 "bdev_name": "Malloc0" 00:05:15.732 }, 00:05:15.732 { 00:05:15.732 "nbd_device": "/dev/nbd1", 00:05:15.732 "bdev_name": "Malloc1" 00:05:15.732 } 00:05:15.732 ]' 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.732 /dev/nbd1' 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.732 /dev/nbd1' 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.732 07:48:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.991 256+0 records in 00:05:15.991 256+0 records out 00:05:15.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380221 s, 276 MB/s 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.991 256+0 records in 00:05:15.991 256+0 records out 00:05:15.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132989 s, 78.8 MB/s 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.991 256+0 records in 00:05:15.991 256+0 records out 00:05:15.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142419 s, 73.6 MB/s 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.991 07:48:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.991 07:48:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.250 07:48:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.508 07:48:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.508 07:48:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.766 07:48:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.024 [2024-11-27 07:48:10.930925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.025 [2024-11-27 07:48:10.971257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.025 [2024-11-27 07:48:10.971259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.025 [2024-11-27 07:48:11.013363] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.025 [2024-11-27 07:48:11.013401] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.317 07:48:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.317 07:48:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:20.317 spdk_app_start Round 2 00:05:20.317 07:48:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2263822 /var/tmp/spdk-nbd.sock 00:05:20.317 07:48:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2263822 ']' 00:05:20.317 07:48:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.317 07:48:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.317 07:48:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:20.317 07:48:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.317 07:48:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.317 07:48:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.317 07:48:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.317 07:48:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.317 Malloc0 00:05:20.317 07:48:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.317 Malloc1 00:05:20.317 07:48:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.317 07:48:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.575 /dev/nbd0 00:05:20.575 07:48:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.575 07:48:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.575 07:48:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.575 07:48:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.575 07:48:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.575 07:48:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.575 07:48:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.575 07:48:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.575 07:48:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.575 07:48:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.575 07:48:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:20.575 1+0 records in 00:05:20.575 1+0 records out 00:05:20.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183595 s, 22.3 MB/s 00:05:20.575 07:48:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.576 07:48:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.576 07:48:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.576 07:48:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.576 07:48:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.576 07:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.576 07:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.576 07:48:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.834 /dev/nbd1 00:05:20.834 07:48:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.834 07:48:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.834 1+0 records in 00:05:20.834 1+0 records out 00:05:20.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000148629 s, 27.6 MB/s 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.834 07:48:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.834 07:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.834 07:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.834 07:48:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.834 07:48:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.834 07:48:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:21.093 { 00:05:21.093 "nbd_device": "/dev/nbd0", 00:05:21.093 "bdev_name": "Malloc0" 00:05:21.093 }, 00:05:21.093 { 00:05:21.093 "nbd_device": "/dev/nbd1", 00:05:21.093 "bdev_name": "Malloc1" 00:05:21.093 } 00:05:21.093 ]' 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.093 { 00:05:21.093 "nbd_device": "/dev/nbd0", 00:05:21.093 "bdev_name": "Malloc0" 00:05:21.093 }, 00:05:21.093 { 00:05:21.093 "nbd_device": "/dev/nbd1", 00:05:21.093 "bdev_name": "Malloc1" 00:05:21.093 } 00:05:21.093 ]' 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.093 /dev/nbd1' 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.093 /dev/nbd1' 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.093 256+0 records in 00:05:21.093 256+0 records out 00:05:21.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106918 s, 98.1 MB/s 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.093 256+0 records in 00:05:21.093 256+0 records out 00:05:21.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142104 s, 73.8 MB/s 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.093 256+0 records in 00:05:21.093 256+0 records out 00:05:21.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014935 s, 70.2 MB/s 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.093 07:48:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.351 07:48:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.351 07:48:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.351 07:48:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.351 07:48:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.351 07:48:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.351 07:48:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.351 07:48:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.351 07:48:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.351 07:48:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.351 07:48:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.610 07:48:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.868 07:48:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.868 07:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.868 07:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.869 07:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.869 07:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.869 07:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.869 07:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.869 07:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.869 07:48:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.869 07:48:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.869 07:48:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.869 07:48:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.869 07:48:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.127 07:48:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:22.127 [2024-11-27 07:48:16.168656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.127 [2024-11-27 07:48:16.206243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.127 [2024-11-27 07:48:16.206246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.386 [2024-11-27 07:48:16.247448] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.386 [2024-11-27 07:48:16.247486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.918 07:48:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2263822 /var/tmp/spdk-nbd.sock 00:05:24.918 07:48:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2263822 ']' 00:05:24.918 07:48:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.918 07:48:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.918 07:48:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
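The teardown that follows this waitforlisten goes through the autotest killprocess helper, just as it did for the scheduler app earlier. Below is a stripped-down sketch of that guard-then-kill pattern, covering only the branch the trace exercises (a live, non-sudo process on Linux); the function name and the handling of the sudo case are assumptions here.

    # Sketch of the killprocess guard-then-kill flow seen in the trace.
    killprocess_sketch() {
      local pid=$1
      [[ -n "$pid" ]] || return 1      # the '[' -z "$pid" ']' guard
      kill -0 "$pid" || return 1       # the process must still be alive
      if [[ $(uname) == Linux ]]; then
        # Resolve the command name; the real helper special-cases 'sudo'.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ "$process_name" != sudo ]] || return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true              # reap it if it is a child of this shell
    }

In the trace the resolved name is reactor_0 (the SPDK primary reactor), so the sudo check is a no-op and the app_repeat process is simply signalled and waited on.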
00:05:24.918 07:48:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.918 07:48:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.176 07:48:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.176 07:48:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:25.176 07:48:19 event.app_repeat -- event/event.sh@39 -- # killprocess 2263822 00:05:25.176 07:48:19 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2263822 ']' 00:05:25.176 07:48:19 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2263822 00:05:25.176 07:48:19 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:25.176 07:48:19 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.176 07:48:19 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2263822 00:05:25.176 07:48:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.176 07:48:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.176 07:48:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2263822' 00:05:25.177 killing process with pid 2263822 00:05:25.177 07:48:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2263822 00:05:25.177 07:48:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2263822 00:05:25.436 spdk_app_start is called in Round 0. 00:05:25.436 Shutdown signal received, stop current app iteration 00:05:25.436 Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 reinitialization... 00:05:25.436 spdk_app_start is called in Round 1. 00:05:25.436 Shutdown signal received, stop current app iteration 00:05:25.436 Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 reinitialization... 00:05:25.436 spdk_app_start is called in Round 2. 00:05:25.436 Shutdown signal received, stop current app iteration 00:05:25.436 Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 reinitialization... 00:05:25.436 spdk_app_start is called in Round 3. 
00:05:25.436 Shutdown signal received, stop current app iteration 00:05:25.436 07:48:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:25.436 07:48:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:25.436 00:05:25.436 real 0m16.228s 00:05:25.436 user 0m35.588s 00:05:25.436 sys 0m2.515s 00:05:25.436 07:48:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.436 07:48:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.436 ************************************ 00:05:25.436 END TEST app_repeat 00:05:25.436 ************************************ 00:05:25.436 07:48:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:25.436 07:48:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:25.436 07:48:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.436 07:48:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.436 07:48:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.436 ************************************ 00:05:25.436 START TEST cpu_locks 00:05:25.436 ************************************ 00:05:25.436 07:48:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:25.695 * Looking for test storage... 00:05:25.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:25.695 07:48:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.695 07:48:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.695 07:48:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.695 07:48:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:25.695 07:48:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.696 07:48:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.696 07:48:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.696 07:48:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:25.696 07:48:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.696 07:48:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.696 --rc genhtml_branch_coverage=1 00:05:25.696 --rc genhtml_function_coverage=1 00:05:25.696 --rc genhtml_legend=1 00:05:25.696 --rc geninfo_all_blocks=1 00:05:25.696 --rc geninfo_unexecuted_blocks=1 00:05:25.696 00:05:25.696 ' 00:05:25.696 07:48:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.696 --rc genhtml_branch_coverage=1 00:05:25.696 --rc genhtml_function_coverage=1 00:05:25.696 --rc genhtml_legend=1 00:05:25.696 --rc geninfo_all_blocks=1 00:05:25.696 --rc geninfo_unexecuted_blocks=1 00:05:25.696 00:05:25.696 ' 00:05:25.696 07:48:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.696 --rc genhtml_branch_coverage=1 00:05:25.696 --rc genhtml_function_coverage=1 00:05:25.696 --rc genhtml_legend=1 00:05:25.696 --rc geninfo_all_blocks=1 00:05:25.696 --rc geninfo_unexecuted_blocks=1 00:05:25.696 00:05:25.696 ' 00:05:25.696 07:48:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.696 --rc genhtml_branch_coverage=1 00:05:25.696 --rc genhtml_function_coverage=1 00:05:25.696 --rc genhtml_legend=1 00:05:25.696 --rc geninfo_all_blocks=1 00:05:25.696 --rc geninfo_unexecuted_blocks=1 00:05:25.696 00:05:25.696 ' 00:05:25.696 07:48:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:25.696 07:48:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:25.696 07:48:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:25.696 07:48:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:25.696 07:48:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.696 07:48:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.696 07:48:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.696 ************************************ 
00:05:25.696 START TEST default_locks 00:05:25.696 ************************************ 00:05:25.696 07:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:25.696 07:48:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2267208 00:05:25.696 07:48:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2267208 00:05:25.696 07:48:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.696 07:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2267208 ']' 00:05:25.696 07:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.696 07:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.696 07:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.696 07:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.696 07:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.696 [2024-11-27 07:48:19.732163] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:05:25.696 [2024-11-27 07:48:19.732206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267208 ] 00:05:25.696 [2024-11-27 07:48:19.792535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.955 [2024-11-27 07:48:19.833182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.955 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.955 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:25.955 07:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2267208 00:05:25.955 07:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2267208 00:05:25.955 07:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.521 lslocks: write error 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2267208 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2267208 ']' 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2267208 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267208 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2267208' 00:05:26.521 killing process with pid 2267208 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2267208 00:05:26.521 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2267208 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2267208 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2267208 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2267208 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2267208 ']' 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2267208) - No such process 00:05:26.780 ERROR: process (pid: 2267208) is no longer running 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.780 00:05:26.780 real 0m1.036s 00:05:26.780 user 0m0.980s 00:05:26.780 sys 0m0.481s 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.780 07:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.780 ************************************ 00:05:26.780 END TEST default_locks 00:05:26.780 ************************************ 00:05:26.780 07:48:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:26.780 07:48:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.780 07:48:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.780 07:48:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.780 ************************************ 00:05:26.780 START TEST default_locks_via_rpc 00:05:26.780 ************************************ 00:05:26.780 07:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:26.780 07:48:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.780 07:48:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2267465 00:05:26.780 07:48:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2267465 00:05:26.780 07:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2267465 ']' 00:05:26.780 07:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.780 07:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.780 07:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
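The default_locks run above starts spdk_tgt on core 0 (-m 0x1) and then uses lslocks to confirm the core lock is really held before killing the target. A rough sketch of that check and teardown; the pid is a placeholder standing in for the spdk_tgt pid seen in the log.

  pid=2267208                                      # placeholder: the target pid from the run above
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo 'core lock held'
  # the "lslocks: write error" in the log is harmless: grep -q closes the pipe once it matches
  kill -0 "$pid" && [ "$(ps --no-headers -o comm= "$pid")" != sudo ] && kill "$pid"
  wait "$pid" 2>/dev/null || true                  # wait only succeeds for children of this shell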
00:05:26.780 07:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.780 07:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.780 [2024-11-27 07:48:20.835277] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:05:26.780 [2024-11-27 07:48:20.835319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267465 ] 00:05:27.039 [2024-11-27 07:48:20.897553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.039 [2024-11-27 07:48:20.940451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2267465 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2267465 00:05:27.297 07:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.556 07:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2267465 00:05:27.556 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2267465 ']' 00:05:27.556 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2267465 00:05:27.556 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.556 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.556 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267465 00:05:27.556 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.556 
07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.556 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2267465' 00:05:27.556 killing process with pid 2267465 00:05:27.556 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2267465 00:05:27.556 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2267465 00:05:28.124 00:05:28.124 real 0m1.157s 00:05:28.124 user 0m1.111s 00:05:28.124 sys 0m0.520s 00:05:28.124 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.124 07:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 ************************************ 00:05:28.124 END TEST default_locks_via_rpc 00:05:28.124 ************************************ 00:05:28.124 07:48:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:28.124 07:48:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.124 07:48:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.124 07:48:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 ************************************ 00:05:28.124 START TEST non_locking_app_on_locked_coremask 00:05:28.124 ************************************ 00:05:28.124 07:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:28.124 07:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2267719 00:05:28.124 07:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2267719 /var/tmp/spdk.sock 00:05:28.124 07:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.124 07:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2267719 ']' 00:05:28.124 07:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.124 07:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.124 07:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.124 07:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.124 07:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 [2024-11-27 07:48:22.048550] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
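default_locks_via_rpc, which finished just above, exercises the same core lock but toggles it at runtime over the RPC socket instead of tying it to process lifetime. A hedged sketch of that sequence; the method names and lock-file location are taken from the trace, and $tgt_pid is a placeholder.

  sock=/var/tmp/spdk.sock
  scripts/rpc.py -s "$sock" framework_disable_cpumask_locks   # no_locks then finds zero /var/tmp/spdk_cpu_lock_* files
  scripts/rpc.py -s "$sock" framework_enable_cpumask_locks    # the locks are taken again
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo 'lock re-acquired'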
00:05:28.124 [2024-11-27 07:48:22.048591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267719 ] 00:05:28.124 [2024-11-27 07:48:22.110281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.125 [2024-11-27 07:48:22.153294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.383 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.383 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.383 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2267728 00:05:28.384 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2267728 /var/tmp/spdk2.sock 00:05:28.384 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:28.384 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2267728 ']' 00:05:28.384 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.384 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.384 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.384 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.384 07:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.384 [2024-11-27 07:48:22.412236] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:05:28.384 [2024-11-27 07:48:22.412286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267728 ] 00:05:28.642 [2024-11-27 07:48:22.499162] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:28.642 [2024-11-27 07:48:22.499186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.642 [2024-11-27 07:48:22.584476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.210 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.210 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:29.210 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2267719 00:05:29.210 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.210 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2267719 00:05:29.777 lslocks: write error 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2267719 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2267719 ']' 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2267719 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267719 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2267719' 00:05:29.777 killing process with pid 2267719 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2267719 00:05:29.777 07:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2267719 00:05:30.345 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2267728 00:05:30.345 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2267728 ']' 00:05:30.345 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2267728 00:05:30.345 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:30.345 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.345 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267728 00:05:30.604 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.604 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.604 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2267728' 00:05:30.604 
killing process with pid 2267728 00:05:30.604 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2267728 00:05:30.604 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2267728 00:05:30.862 00:05:30.862 real 0m2.786s 00:05:30.862 user 0m2.954s 00:05:30.862 sys 0m0.902s 00:05:30.862 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.862 07:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.862 ************************************ 00:05:30.862 END TEST non_locking_app_on_locked_coremask 00:05:30.862 ************************************ 00:05:30.862 07:48:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:30.862 07:48:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.862 07:48:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.862 07:48:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.862 ************************************ 00:05:30.862 START TEST locking_app_on_unlocked_coremask 00:05:30.862 ************************************ 00:05:30.862 07:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:30.862 07:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2268220 00:05:30.862 07:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2268220 /var/tmp/spdk.sock 00:05:30.862 07:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:30.862 07:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2268220 ']' 00:05:30.862 07:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.862 07:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.862 07:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.862 07:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.862 07:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.862 [2024-11-27 07:48:24.886579] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:05:30.862 [2024-11-27 07:48:24.886621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268220 ] 00:05:30.863 [2024-11-27 07:48:24.948304] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
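The non_locking_app_on_locked_coremask run that just ended shows the intended escape hatch: a second target may share an already-locked core when it opts out of core locks. A minimal sketch, with binary paths relative to the SPDK tree:

  build/bin/spdk_tgt -m 0x1 &                                        # first instance locks core 0
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # the second instance logs "CPU core locks deactivated." and runs on the same core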
00:05:30.863 [2024-11-27 07:48:24.948330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.121 [2024-11-27 07:48:24.992481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2268233 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2268233 /var/tmp/spdk2.sock 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2268233 ']' 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.121 07:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.379 [2024-11-27 07:48:25.250945] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:05:31.379 [2024-11-27 07:48:25.250999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268233 ] 00:05:31.379 [2024-11-27 07:48:25.337053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.379 [2024-11-27 07:48:25.421974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.316 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.316 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:32.316 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2268233 00:05:32.316 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2268233 00:05:32.316 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.575 lslocks: write error 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2268220 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2268220 ']' 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2268220 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268220 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268220' 00:05:32.575 killing process with pid 2268220 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2268220 00:05:32.575 07:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2268220 00:05:33.511 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2268233 00:05:33.511 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2268233 ']' 00:05:33.511 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2268233 00:05:33.511 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:33.511 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.511 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268233 00:05:33.511 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.511 07:48:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.511 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268233' 00:05:33.511 killing process with pid 2268233 00:05:33.511 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2268233 00:05:33.511 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2268233 00:05:33.770 00:05:33.770 real 0m2.792s 00:05:33.770 user 0m2.946s 00:05:33.770 sys 0m0.936s 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.770 ************************************ 00:05:33.770 END TEST locking_app_on_unlocked_coremask 00:05:33.770 ************************************ 00:05:33.770 07:48:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:33.770 07:48:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.770 07:48:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.770 07:48:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.770 ************************************ 00:05:33.770 START TEST locking_app_on_locked_coremask 00:05:33.770 ************************************ 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2268721 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2268721 /var/tmp/spdk.sock 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2268721 ']' 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.770 07:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.770 [2024-11-27 07:48:27.750407] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
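locking_app_on_unlocked_coremask, completed above, flips the roles: the first target opts out of the lock, so a second, locking target can still start on the same core and claim the lock for itself. A sketch under the same path assumptions as before:

  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # runs without taking the core 0 lock
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # starts normally and claims the lock itself
  # lslocks -p <second pid> | grep spdk_cpu_lock now matches, as the trace shows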
00:05:33.770 [2024-11-27 07:48:27.750454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268721 ] 00:05:33.770 [2024-11-27 07:48:27.811732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.770 [2024-11-27 07:48:27.849902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2268724 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2268724 /var/tmp/spdk2.sock 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2268724 /var/tmp/spdk2.sock 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2268724 /var/tmp/spdk2.sock 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2268724 ']' 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.029 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.029 [2024-11-27 07:48:28.109099] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:05:34.029 [2024-11-27 07:48:28.109144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268724 ] 00:05:34.288 [2024-11-27 07:48:28.197646] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2268721 has claimed it. 00:05:34.288 [2024-11-27 07:48:28.197687] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2268724) - No such process 00:05:34.853 ERROR: process (pid: 2268724) is no longer running 00:05:34.853 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.853 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:34.853 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:34.853 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.853 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.853 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.853 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2268721 00:05:34.853 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2268721 00:05:34.853 07:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.421 lslocks: write error 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2268721 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2268721 ']' 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2268721 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268721 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268721' 00:05:35.421 killing process with pid 2268721 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2268721 00:05:35.421 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2268721 00:05:35.680 00:05:35.680 real 0m1.883s 00:05:35.680 user 0m2.024s 00:05:35.680 sys 0m0.650s 00:05:35.680 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:35.680 07:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.680 ************************************ 00:05:35.680 END TEST locking_app_on_locked_coremask 00:05:35.680 ************************************ 00:05:35.680 07:48:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:35.680 07:48:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.680 07:48:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.680 07:48:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.680 ************************************ 00:05:35.680 START TEST locking_overlapped_coremask 00:05:35.680 ************************************ 00:05:35.680 07:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:35.680 07:48:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2268996 00:05:35.680 07:48:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2268996 /var/tmp/spdk.sock 00:05:35.680 07:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2268996 ']' 00:05:35.680 07:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.680 07:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.680 07:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.680 07:48:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:35.680 07:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.680 07:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.680 [2024-11-27 07:48:29.684972] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
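locking_app_on_locked_coremask, which just wrapped up, is the conflicting case: with locks left enabled on both sides, a second target asking for an already-claimed core refuses to start. A sketch, with the error text quoted from the log:

  build/bin/spdk_tgt -m 0x1 &                        # claims the core 0 lock
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
  # -> "Cannot create lock on core 0, probably process <pid> has claimed it."
  # -> "Unable to acquire lock on assigned core mask - exiting."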
00:05:35.680 [2024-11-27 07:48:29.685012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268996 ] 00:05:35.680 [2024-11-27 07:48:29.747516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.939 [2024-11-27 07:48:29.793287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.939 [2024-11-27 07:48:29.793383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.939 [2024-11-27 07:48:29.793383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2269114 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2269114 /var/tmp/spdk2.sock 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2269114 /var/tmp/spdk2.sock 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2269114 /var/tmp/spdk2.sock 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2269114 ']' 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.939 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.197 [2024-11-27 07:48:30.059903] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:05:36.197 [2024-11-27 07:48:30.059971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269114 ] 00:05:36.197 [2024-11-27 07:48:30.151952] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2268996 has claimed it. 00:05:36.197 [2024-11-27 07:48:30.151986] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:36.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2269114) - No such process 00:05:36.827 ERROR: process (pid: 2269114) is no longer running 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2268996 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2268996 ']' 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2268996 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268996 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268996' 00:05:36.827 killing process with pid 2268996 00:05:36.827 07:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2268996 00:05:36.827 07:48:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2268996 00:05:37.092 00:05:37.092 real 0m1.434s 00:05:37.092 user 0m3.972s 00:05:37.092 sys 0m0.393s 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.092 ************************************ 00:05:37.092 END TEST locking_overlapped_coremask 00:05:37.092 ************************************ 00:05:37.092 07:48:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:37.092 07:48:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.092 07:48:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.092 07:48:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.092 ************************************ 00:05:37.092 START TEST locking_overlapped_coremask_via_rpc 00:05:37.092 ************************************ 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2269262 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2269262 /var/tmp/spdk.sock 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2269262 ']' 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.092 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.092 [2024-11-27 07:48:31.192131] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:05:37.092 [2024-11-27 07:48:31.192179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269262 ] 00:05:37.351 [2024-11-27 07:48:31.256938] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
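Before that first pair of targets was torn down, check_remaining_locks (traced a little earlier at cpu_locks.sh@36-38) confirmed that the per-core lock files left in /var/tmp matched what a 0x7 mask should produce: spdk_cpu_lock_000 through spdk_cpu_lock_002, one zero-padded file per claimed core. A stand-alone sketch of that comparison, mirroring the traced lines:

    # Sketch of check_remaining_locks for a 0x7 core mask (cores 0-2).
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "lock files match the claimed cores"
    else
        echo "unexpected lock files: ${locks[*]}" >&2
        exit 1
    fi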
00:05:37.351 [2024-11-27 07:48:31.256970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.351 [2024-11-27 07:48:31.300694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.351 [2024-11-27 07:48:31.300792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.351 [2024-11-27 07:48:31.300794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2269487 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2269487 /var/tmp/spdk2.sock 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2269487 ']' 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.609 07:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.609 [2024-11-27 07:48:31.574467] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:05:37.609 [2024-11-27 07:48:31.574516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269487 ] 00:05:37.609 [2024-11-27 07:48:31.667185] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
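The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from the waitforlisten helper, which keeps polling the new target's RPC socket (the trace shows max_retries=100) until it answers. A simplified version of that loop is sketched below; the probe call is an assumption, since the helper's body is not shown in this trace:

    # Simplified waitforlisten-style loop (probe method assumed, not taken from the trace).
    rpc_addr=/var/tmp/spdk2.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            echo "target is listening on $rpc_addr"
            break
        fi
        sleep 0.1
    done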
00:05:37.609 [2024-11-27 07:48:31.667217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.868 [2024-11-27 07:48:31.755132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.868 [2024-11-27 07:48:31.755249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.868 [2024-11-27 07:48:31.755251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.436 [2024-11-27 07:48:32.418021] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2269262 has claimed it. 
00:05:38.436 request: 00:05:38.436 { 00:05:38.436 "method": "framework_enable_cpumask_locks", 00:05:38.436 "req_id": 1 00:05:38.436 } 00:05:38.436 Got JSON-RPC error response 00:05:38.436 response: 00:05:38.436 { 00:05:38.436 "code": -32603, 00:05:38.436 "message": "Failed to claim CPU core: 2" 00:05:38.436 } 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2269262 /var/tmp/spdk.sock 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2269262 ']' 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.436 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.694 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.694 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.694 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2269487 /var/tmp/spdk2.sock 00:05:38.694 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2269487 ']' 00:05:38.694 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.694 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.694 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
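The JSON-RPC exchange above is the heart of the via_rpc variant: both targets start with --disable-cpumask-locks, the first one is then asked to take its locks, and the same request against the overlapping target must fail with error -32603 ("Failed to claim CPU core: 2"). The sequence of calls implied by the trace looks roughly like this (a sketch, not a verbatim copy of cpu_locks.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # First target (-m 0x7, default socket /var/tmp/spdk.sock) claims cores 0-2 on demand.
    $rpc framework_enable_cpumask_locks

    # Second target (-m 0x1c on /var/tmp/spdk2.sock) overlaps on core 2, so the same
    # request is expected to fail with JSON-RPC error -32603 "Failed to claim CPU core: 2".
    if ! $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "second lock claim refused, as expected"
    fi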
00:05:38.694 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.694 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.953 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.953 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.953 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:38.953 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.953 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.953 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.953 00:05:38.953 real 0m1.695s 00:05:38.953 user 0m0.820s 00:05:38.953 sys 0m0.136s 00:05:38.953 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.953 07:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.953 ************************************ 00:05:38.953 END TEST locking_overlapped_coremask_via_rpc 00:05:38.953 ************************************ 00:05:38.953 07:48:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:38.953 07:48:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2269262 ]] 00:05:38.953 07:48:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2269262 00:05:38.954 07:48:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2269262 ']' 00:05:38.954 07:48:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2269262 00:05:38.954 07:48:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:38.954 07:48:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.954 07:48:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2269262 00:05:38.954 07:48:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.954 07:48:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.954 07:48:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269262' 00:05:38.954 killing process with pid 2269262 00:05:38.954 07:48:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2269262 00:05:38.954 07:48:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2269262 00:05:39.212 07:48:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2269487 ]] 00:05:39.212 07:48:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2269487 00:05:39.212 07:48:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2269487 ']' 00:05:39.212 07:48:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2269487 00:05:39.212 07:48:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:39.212 07:48:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:39.212 07:48:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2269487 00:05:39.212 07:48:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:39.212 07:48:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:39.212 07:48:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269487' 00:05:39.212 killing process with pid 2269487 00:05:39.213 07:48:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2269487 00:05:39.213 07:48:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2269487 00:05:39.781 07:48:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:39.781 07:48:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:39.781 07:48:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2269262 ]] 00:05:39.781 07:48:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2269262 00:05:39.781 07:48:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2269262 ']' 00:05:39.781 07:48:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2269262 00:05:39.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2269262) - No such process 00:05:39.781 07:48:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2269262 is not found' 00:05:39.781 Process with pid 2269262 is not found 00:05:39.781 07:48:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2269487 ]] 00:05:39.781 07:48:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2269487 00:05:39.781 07:48:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2269487 ']' 00:05:39.781 07:48:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2269487 00:05:39.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2269487) - No such process 00:05:39.781 07:48:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2269487 is not found' 00:05:39.781 Process with pid 2269487 is not found 00:05:39.781 07:48:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:39.781 00:05:39.781 real 0m14.138s 00:05:39.781 user 0m24.534s 00:05:39.781 sys 0m4.940s 00:05:39.781 07:48:33 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.781 07:48:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.781 ************************************ 00:05:39.781 END TEST cpu_locks 00:05:39.781 ************************************ 00:05:39.781 00:05:39.781 real 0m38.884s 00:05:39.781 user 1m14.525s 00:05:39.781 sys 0m8.342s 00:05:39.781 07:48:33 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.781 07:48:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.781 ************************************ 00:05:39.781 END TEST event 00:05:39.781 ************************************ 00:05:39.781 07:48:33 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:39.781 07:48:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.781 07:48:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.781 07:48:33 -- common/autotest_common.sh@10 -- # set +x 00:05:39.781 ************************************ 00:05:39.781 START TEST thread 00:05:39.781 ************************************ 00:05:39.781 07:48:33 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:39.781 * Looking for test storage... 00:05:39.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.781 07:48:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.781 07:48:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.781 07:48:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.781 07:48:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.781 07:48:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.781 07:48:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.781 07:48:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.781 07:48:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.781 07:48:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.781 07:48:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.781 07:48:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.781 07:48:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:39.781 07:48:33 thread -- scripts/common.sh@345 -- # : 1 00:05:39.781 07:48:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.781 07:48:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.781 07:48:33 thread -- scripts/common.sh@365 -- # decimal 1 00:05:39.781 07:48:33 thread -- scripts/common.sh@353 -- # local d=1 00:05:39.781 07:48:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.781 07:48:33 thread -- scripts/common.sh@355 -- # echo 1 00:05:39.781 07:48:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.781 07:48:33 thread -- scripts/common.sh@366 -- # decimal 2 00:05:39.781 07:48:33 thread -- scripts/common.sh@353 -- # local d=2 00:05:39.781 07:48:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.781 07:48:33 thread -- scripts/common.sh@355 -- # echo 2 00:05:39.781 07:48:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.781 07:48:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.781 07:48:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.781 07:48:33 thread -- scripts/common.sh@368 -- # return 0 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.781 --rc genhtml_branch_coverage=1 00:05:39.781 --rc genhtml_function_coverage=1 00:05:39.781 --rc genhtml_legend=1 00:05:39.781 --rc geninfo_all_blocks=1 00:05:39.781 --rc geninfo_unexecuted_blocks=1 00:05:39.781 00:05:39.781 ' 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.781 --rc genhtml_branch_coverage=1 00:05:39.781 --rc genhtml_function_coverage=1 00:05:39.781 --rc genhtml_legend=1 00:05:39.781 --rc geninfo_all_blocks=1 00:05:39.781 --rc geninfo_unexecuted_blocks=1 00:05:39.781 
00:05:39.781 ' 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.781 --rc genhtml_branch_coverage=1 00:05:39.781 --rc genhtml_function_coverage=1 00:05:39.781 --rc genhtml_legend=1 00:05:39.781 --rc geninfo_all_blocks=1 00:05:39.781 --rc geninfo_unexecuted_blocks=1 00:05:39.781 00:05:39.781 ' 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.781 --rc genhtml_branch_coverage=1 00:05:39.781 --rc genhtml_function_coverage=1 00:05:39.781 --rc genhtml_legend=1 00:05:39.781 --rc geninfo_all_blocks=1 00:05:39.781 --rc geninfo_unexecuted_blocks=1 00:05:39.781 00:05:39.781 ' 00:05:39.781 07:48:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.781 07:48:33 thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.040 ************************************ 00:05:40.040 START TEST thread_poller_perf 00:05:40.040 ************************************ 00:05:40.040 07:48:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.040 [2024-11-27 07:48:33.920891] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:05:40.040 [2024-11-27 07:48:33.920969] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269835 ] 00:05:40.040 [2024-11-27 07:48:33.987973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.040 [2024-11-27 07:48:34.028077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.040 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:40.976 [2024-11-27T06:48:35.085Z] ====================================== 00:05:40.976 [2024-11-27T06:48:35.085Z] busy:2306802190 (cyc) 00:05:40.976 [2024-11-27T06:48:35.085Z] total_run_count: 414000 00:05:40.976 [2024-11-27T06:48:35.085Z] tsc_hz: 2300000000 (cyc) 00:05:40.976 [2024-11-27T06:48:35.085Z] ====================================== 00:05:40.976 [2024-11-27T06:48:35.085Z] poller_cost: 5571 (cyc), 2422 (nsec) 00:05:40.976 00:05:40.976 real 0m1.172s 00:05:40.976 user 0m1.099s 00:05:40.976 sys 0m0.068s 00:05:40.976 07:48:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.976 07:48:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.976 ************************************ 00:05:40.976 END TEST thread_poller_perf 00:05:40.976 ************************************ 00:05:41.234 07:48:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.234 07:48:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:41.234 07:48:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.234 07:48:35 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.234 ************************************ 00:05:41.234 START TEST thread_poller_perf 00:05:41.234 ************************************ 00:05:41.234 07:48:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.234 [2024-11-27 07:48:35.164521] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:05:41.234 [2024-11-27 07:48:35.164610] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270082 ] 00:05:41.234 [2024-11-27 07:48:35.232650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.234 [2024-11-27 07:48:35.272790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.234 Running 1000 pollers for 1 seconds with 0 microseconds period. 
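The poller_cost line is derived from the other counters: total busy cycles divided by the number of poller invocations, then converted to nanoseconds using the reported TSC frequency. For the first run above, 2306802190 cyc / 414000 runs ≈ 5571 cyc per invocation, and 5571 cyc at 2.3 cyc per ns ≈ 2422 ns, which matches the printed values; the second run's figures further down follow the same formula. The same check, re-derived in shell arithmetic from the numbers already shown:

    # Re-derive poller_cost from the counters printed by the first poller_perf run.
    busy=2306802190        # busy cycles
    runs=414000            # total_run_count
    tsc_hz=2300000000      # TSC cycles per second
    cyc_per_call=$((busy / runs))                          # 5571
    nsec_per_call=$((cyc_per_call * 1000000000 / tsc_hz))  # 2422
    echo "poller_cost: ${cyc_per_call} (cyc), ${nsec_per_call} (nsec)"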
00:05:42.609 [2024-11-27T06:48:36.718Z] ====================================== 00:05:42.609 [2024-11-27T06:48:36.718Z] busy:2301614830 (cyc) 00:05:42.610 [2024-11-27T06:48:36.719Z] total_run_count: 5407000 00:05:42.610 [2024-11-27T06:48:36.719Z] tsc_hz: 2300000000 (cyc) 00:05:42.610 [2024-11-27T06:48:36.719Z] ====================================== 00:05:42.610 [2024-11-27T06:48:36.719Z] poller_cost: 425 (cyc), 184 (nsec) 00:05:42.610 00:05:42.610 real 0m1.168s 00:05:42.610 user 0m1.094s 00:05:42.610 sys 0m0.070s 00:05:42.610 07:48:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.610 07:48:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.610 ************************************ 00:05:42.610 END TEST thread_poller_perf 00:05:42.610 ************************************ 00:05:42.610 07:48:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:42.610 00:05:42.610 real 0m2.644s 00:05:42.610 user 0m2.347s 00:05:42.610 sys 0m0.312s 00:05:42.610 07:48:36 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.610 07:48:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.610 ************************************ 00:05:42.610 END TEST thread 00:05:42.610 ************************************ 00:05:42.610 07:48:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:42.610 07:48:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:42.610 07:48:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.610 07:48:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.610 07:48:36 -- common/autotest_common.sh@10 -- # set +x 00:05:42.610 ************************************ 00:05:42.610 START TEST app_cmdline 00:05:42.610 ************************************ 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:42.610 * Looking for test storage... 
00:05:42.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.610 07:48:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.610 --rc genhtml_branch_coverage=1 00:05:42.610 --rc genhtml_function_coverage=1 00:05:42.610 --rc genhtml_legend=1 00:05:42.610 --rc geninfo_all_blocks=1 00:05:42.610 --rc geninfo_unexecuted_blocks=1 00:05:42.610 00:05:42.610 ' 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.610 --rc genhtml_branch_coverage=1 00:05:42.610 --rc genhtml_function_coverage=1 00:05:42.610 --rc genhtml_legend=1 00:05:42.610 --rc geninfo_all_blocks=1 00:05:42.610 --rc geninfo_unexecuted_blocks=1 
00:05:42.610 00:05:42.610 ' 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.610 --rc genhtml_branch_coverage=1 00:05:42.610 --rc genhtml_function_coverage=1 00:05:42.610 --rc genhtml_legend=1 00:05:42.610 --rc geninfo_all_blocks=1 00:05:42.610 --rc geninfo_unexecuted_blocks=1 00:05:42.610 00:05:42.610 ' 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.610 --rc genhtml_branch_coverage=1 00:05:42.610 --rc genhtml_function_coverage=1 00:05:42.610 --rc genhtml_legend=1 00:05:42.610 --rc geninfo_all_blocks=1 00:05:42.610 --rc geninfo_unexecuted_blocks=1 00:05:42.610 00:05:42.610 ' 00:05:42.610 07:48:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:42.610 07:48:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2270388 00:05:42.610 07:48:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2270388 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2270388 ']' 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.610 07:48:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.610 07:48:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:42.610 [2024-11-27 07:48:36.617957] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
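This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods should be callable on /var/tmp/spdk.sock; anything else is rejected with JSON-RPC error -32601 ("Method not found"), which is what the env_dpdk_get_mem_stats call further down is expected to run into. A small sketch of both sides of that behaviour (illustration only, not cmdline.sh itself):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc spdk_get_version     # allowed: returns the version JSON shown below
    $rpc rpc_get_methods      # allowed: lists the callable methods

    # Any method outside the allow-list should fail with -32601 "Method not found".
    if ! $rpc env_dpdk_get_mem_stats; then
        echo "env_dpdk_get_mem_stats rejected, as expected under --rpcs-allowed"
    fi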
00:05:42.610 [2024-11-27 07:48:36.618008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270388 ] 00:05:42.610 [2024-11-27 07:48:36.679898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.868 [2024-11-27 07:48:36.721871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.868 07:48:36 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.868 07:48:36 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:42.868 07:48:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:43.126 { 00:05:43.126 "version": "SPDK v25.01-pre git sha1 4c65c6406", 00:05:43.126 "fields": { 00:05:43.126 "major": 25, 00:05:43.126 "minor": 1, 00:05:43.126 "patch": 0, 00:05:43.126 "suffix": "-pre", 00:05:43.126 "commit": "4c65c6406" 00:05:43.126 } 00:05:43.126 } 00:05:43.126 07:48:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:43.126 07:48:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:43.126 07:48:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:43.126 07:48:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:43.126 07:48:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:43.126 07:48:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.126 07:48:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.126 07:48:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:43.126 07:48:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:43.126 07:48:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:43.126 07:48:37 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.383 request: 00:05:43.383 { 00:05:43.383 "method": "env_dpdk_get_mem_stats", 00:05:43.383 "req_id": 1 00:05:43.383 } 00:05:43.383 Got JSON-RPC error response 00:05:43.383 response: 00:05:43.383 { 00:05:43.383 "code": -32601, 00:05:43.383 "message": "Method not found" 00:05:43.383 } 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.383 07:48:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2270388 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2270388 ']' 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2270388 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2270388 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.383 07:48:37 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2270388' 00:05:43.384 killing process with pid 2270388 00:05:43.384 07:48:37 app_cmdline -- common/autotest_common.sh@973 -- # kill 2270388 00:05:43.384 07:48:37 app_cmdline -- common/autotest_common.sh@978 -- # wait 2270388 00:05:43.641 00:05:43.641 real 0m1.288s 00:05:43.641 user 0m1.528s 00:05:43.641 sys 0m0.404s 00:05:43.641 07:48:37 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.641 07:48:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:43.641 ************************************ 00:05:43.641 END TEST app_cmdline 00:05:43.641 ************************************ 00:05:43.642 07:48:37 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:43.642 07:48:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.642 07:48:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.642 07:48:37 -- common/autotest_common.sh@10 -- # set +x 00:05:43.900 ************************************ 00:05:43.900 START TEST version 00:05:43.900 ************************************ 00:05:43.900 07:48:37 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:43.900 * Looking for test storage... 
00:05:43.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:43.900 07:48:37 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.900 07:48:37 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.900 07:48:37 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.900 07:48:37 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.900 07:48:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.900 07:48:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.900 07:48:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.900 07:48:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.900 07:48:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.900 07:48:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.900 07:48:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.900 07:48:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.900 07:48:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.900 07:48:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.900 07:48:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.900 07:48:37 version -- scripts/common.sh@344 -- # case "$op" in 00:05:43.900 07:48:37 version -- scripts/common.sh@345 -- # : 1 00:05:43.900 07:48:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.900 07:48:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.900 07:48:37 version -- scripts/common.sh@365 -- # decimal 1 00:05:43.900 07:48:37 version -- scripts/common.sh@353 -- # local d=1 00:05:43.900 07:48:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.900 07:48:37 version -- scripts/common.sh@355 -- # echo 1 00:05:43.900 07:48:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.900 07:48:37 version -- scripts/common.sh@366 -- # decimal 2 00:05:43.900 07:48:37 version -- scripts/common.sh@353 -- # local d=2 00:05:43.900 07:48:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.900 07:48:37 version -- scripts/common.sh@355 -- # echo 2 00:05:43.900 07:48:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.900 07:48:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.900 07:48:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.900 07:48:37 version -- scripts/common.sh@368 -- # return 0 00:05:43.900 07:48:37 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.900 07:48:37 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.900 --rc genhtml_branch_coverage=1 00:05:43.900 --rc genhtml_function_coverage=1 00:05:43.900 --rc genhtml_legend=1 00:05:43.900 --rc geninfo_all_blocks=1 00:05:43.900 --rc geninfo_unexecuted_blocks=1 00:05:43.900 00:05:43.900 ' 00:05:43.900 07:48:37 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.901 --rc genhtml_branch_coverage=1 00:05:43.901 --rc genhtml_function_coverage=1 00:05:43.901 --rc genhtml_legend=1 00:05:43.901 --rc geninfo_all_blocks=1 00:05:43.901 --rc geninfo_unexecuted_blocks=1 00:05:43.901 00:05:43.901 ' 00:05:43.901 07:48:37 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.901 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.901 --rc genhtml_branch_coverage=1 00:05:43.901 --rc genhtml_function_coverage=1 00:05:43.901 --rc genhtml_legend=1 00:05:43.901 --rc geninfo_all_blocks=1 00:05:43.901 --rc geninfo_unexecuted_blocks=1 00:05:43.901 00:05:43.901 ' 00:05:43.901 07:48:37 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.901 --rc genhtml_branch_coverage=1 00:05:43.901 --rc genhtml_function_coverage=1 00:05:43.901 --rc genhtml_legend=1 00:05:43.901 --rc geninfo_all_blocks=1 00:05:43.901 --rc geninfo_unexecuted_blocks=1 00:05:43.901 00:05:43.901 ' 00:05:43.901 07:48:37 version -- app/version.sh@17 -- # get_header_version major 00:05:43.901 07:48:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:43.901 07:48:37 version -- app/version.sh@14 -- # cut -f2 00:05:43.901 07:48:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:43.901 07:48:37 version -- app/version.sh@17 -- # major=25 00:05:43.901 07:48:37 version -- app/version.sh@18 -- # get_header_version minor 00:05:43.901 07:48:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:43.901 07:48:37 version -- app/version.sh@14 -- # cut -f2 00:05:43.901 07:48:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:43.901 07:48:37 version -- app/version.sh@18 -- # minor=1 00:05:43.901 07:48:37 version -- app/version.sh@19 -- # get_header_version patch 00:05:43.901 07:48:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:43.901 07:48:37 version -- app/version.sh@14 -- # cut -f2 00:05:43.901 07:48:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:43.901 07:48:37 version -- app/version.sh@19 -- # patch=0 00:05:43.901 07:48:37 version -- app/version.sh@20 -- # get_header_version suffix 00:05:43.901 07:48:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:43.901 07:48:37 version -- app/version.sh@14 -- # cut -f2 00:05:43.901 07:48:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:43.901 07:48:37 version -- app/version.sh@20 -- # suffix=-pre 00:05:43.901 07:48:37 version -- app/version.sh@22 -- # version=25.1 00:05:43.901 07:48:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:43.901 07:48:37 version -- app/version.sh@28 -- # version=25.1rc0 00:05:43.901 07:48:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:43.901 07:48:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:43.901 07:48:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:43.901 07:48:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:43.901 00:05:43.901 real 0m0.224s 00:05:43.901 user 0m0.145s 00:05:43.901 sys 0m0.119s 00:05:43.901 07:48:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.901 
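The get_header_version helper traced above builds each version component straight from include/spdk/version.h: grep the #define line, take the second field with cut -f2, and strip quotes with tr. A stand-alone reproduction of that pipeline (assuming, as the plain cut -f2 implies, a tab between the macro name and its value):

    # Sketch of get_header_version: pull one SPDK version component out of version.h.
    header=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$header" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)      # 25
    minor=$(get_header_version MINOR)      # 1
    patch=$(get_header_version PATCH)      # 0
    suffix=$(get_header_version SUFFIX)    # -pre
    echo "SPDK version: ${major}.${minor} (patch ${patch}, suffix ${suffix})"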
07:48:37 version -- common/autotest_common.sh@10 -- # set +x 00:05:43.901 ************************************ 00:05:43.901 END TEST version 00:05:43.901 ************************************ 00:05:44.159 07:48:38 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:44.159 07:48:38 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:44.159 07:48:38 -- spdk/autotest.sh@194 -- # uname -s 00:05:44.159 07:48:38 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:44.159 07:48:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:44.159 07:48:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:44.159 07:48:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:44.159 07:48:38 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:44.159 07:48:38 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:44.159 07:48:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:44.159 07:48:38 -- common/autotest_common.sh@10 -- # set +x 00:05:44.159 07:48:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:44.159 07:48:38 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:44.159 07:48:38 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:44.159 07:48:38 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:44.159 07:48:38 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:44.159 07:48:38 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:44.159 07:48:38 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:44.159 07:48:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:44.159 07:48:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.159 07:48:38 -- common/autotest_common.sh@10 -- # set +x 00:05:44.159 ************************************ 00:05:44.159 START TEST nvmf_tcp 00:05:44.159 ************************************ 00:05:44.159 07:48:38 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:44.159 * Looking for test storage... 
00:05:44.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:44.159 07:48:38 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.159 07:48:38 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.159 07:48:38 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.159 07:48:38 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.159 07:48:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.419 07:48:38 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:44.419 07:48:38 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.419 07:48:38 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.419 --rc genhtml_branch_coverage=1 00:05:44.419 --rc genhtml_function_coverage=1 00:05:44.419 --rc genhtml_legend=1 00:05:44.419 --rc geninfo_all_blocks=1 00:05:44.419 --rc geninfo_unexecuted_blocks=1 00:05:44.419 00:05:44.419 ' 00:05:44.419 07:48:38 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.419 --rc genhtml_branch_coverage=1 00:05:44.420 --rc genhtml_function_coverage=1 00:05:44.420 --rc genhtml_legend=1 00:05:44.420 --rc geninfo_all_blocks=1 00:05:44.420 --rc geninfo_unexecuted_blocks=1 00:05:44.420 00:05:44.420 ' 00:05:44.420 07:48:38 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:44.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.420 --rc genhtml_branch_coverage=1 00:05:44.420 --rc genhtml_function_coverage=1 00:05:44.420 --rc genhtml_legend=1 00:05:44.420 --rc geninfo_all_blocks=1 00:05:44.420 --rc geninfo_unexecuted_blocks=1 00:05:44.420 00:05:44.420 ' 00:05:44.420 07:48:38 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.420 --rc genhtml_branch_coverage=1 00:05:44.420 --rc genhtml_function_coverage=1 00:05:44.420 --rc genhtml_legend=1 00:05:44.420 --rc geninfo_all_blocks=1 00:05:44.420 --rc geninfo_unexecuted_blocks=1 00:05:44.420 00:05:44.420 ' 00:05:44.420 07:48:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:44.420 07:48:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:44.420 07:48:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:44.420 07:48:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:44.420 07:48:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.420 07:48:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.420 ************************************ 00:05:44.420 START TEST nvmf_target_core 00:05:44.420 ************************************ 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:44.420 * Looking for test storage... 00:05:44.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.420 --rc genhtml_branch_coverage=1 00:05:44.420 --rc genhtml_function_coverage=1 00:05:44.420 --rc genhtml_legend=1 00:05:44.420 --rc geninfo_all_blocks=1 00:05:44.420 --rc geninfo_unexecuted_blocks=1 00:05:44.420 00:05:44.420 ' 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.420 --rc genhtml_branch_coverage=1 00:05:44.420 --rc genhtml_function_coverage=1 00:05:44.420 --rc genhtml_legend=1 00:05:44.420 --rc geninfo_all_blocks=1 00:05:44.420 --rc geninfo_unexecuted_blocks=1 00:05:44.420 00:05:44.420 ' 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.420 --rc genhtml_branch_coverage=1 00:05:44.420 --rc genhtml_function_coverage=1 00:05:44.420 --rc genhtml_legend=1 00:05:44.420 --rc geninfo_all_blocks=1 00:05:44.420 --rc geninfo_unexecuted_blocks=1 00:05:44.420 00:05:44.420 ' 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.420 --rc genhtml_branch_coverage=1 00:05:44.420 --rc genhtml_function_coverage=1 00:05:44.420 --rc genhtml_legend=1 00:05:44.420 --rc geninfo_all_blocks=1 00:05:44.420 --rc geninfo_unexecuted_blocks=1 00:05:44.420 00:05:44.420 ' 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.420 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.421 07:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:44.680 
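Note on the "[: : integer expression expected" message in the common.sh trace above: the traced test is '[' '' -eq 1 ']', and the [ builtin's -eq needs integers on both sides, so a test flag that is unset in this environment (it expands to the empty string) makes the comparison complain and fall through to the else path. A minimal sketch of the failure and of a tolerant form of the same check; the variable name here is illustrative, not taken from the log:

    flag=""                                    # unset/empty flag, as in the trace
    if [ "$flag" -eq 1 ]; then echo on; fi     # prints "[: : integer expression expected"

    # Defaulting the expansion keeps the operand numeric even when the flag is empty:
    if [ "${flag:-0}" -eq 1 ]; then
        echo "flag enabled"
    else
        echo "flag disabled"
    fi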
************************************ 00:05:44.680 START TEST nvmf_abort 00:05:44.680 ************************************ 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:44.680 * Looking for test storage... 00:05:44.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.680 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.681 --rc genhtml_branch_coverage=1 00:05:44.681 --rc genhtml_function_coverage=1 00:05:44.681 --rc genhtml_legend=1 00:05:44.681 --rc geninfo_all_blocks=1 00:05:44.681 --rc geninfo_unexecuted_blocks=1 00:05:44.681 00:05:44.681 ' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.681 --rc genhtml_branch_coverage=1 00:05:44.681 --rc genhtml_function_coverage=1 00:05:44.681 --rc genhtml_legend=1 00:05:44.681 --rc geninfo_all_blocks=1 00:05:44.681 --rc geninfo_unexecuted_blocks=1 00:05:44.681 00:05:44.681 ' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.681 --rc genhtml_branch_coverage=1 00:05:44.681 --rc genhtml_function_coverage=1 00:05:44.681 --rc genhtml_legend=1 00:05:44.681 --rc geninfo_all_blocks=1 00:05:44.681 --rc geninfo_unexecuted_blocks=1 00:05:44.681 00:05:44.681 ' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.681 --rc genhtml_branch_coverage=1 00:05:44.681 --rc genhtml_function_coverage=1 00:05:44.681 --rc genhtml_legend=1 00:05:44.681 --rc geninfo_all_blocks=1 00:05:44.681 --rc geninfo_unexecuted_blocks=1 00:05:44.681 00:05:44.681 ' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
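The cmp_versions trace above (it repeats at the top of every sub-test, since each script sources the same helpers) splits "1.15" and "2" on dots, dashes and colons and compares the fields numerically to decide whether the installed lcov is older than 2.x, which is what selects the --rc lcov_branch_coverage / genhtml options exported right after it. A standalone sketch of that comparison, not the project's scripts/common.sh itself (the function name is illustrative):

    # Return 0 (true) when version $1 is older than version $2.
    version_lt() {
        local -a ver1 ver2
        local i n
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing fields count as 0, so "2" compares like "2.0".
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use the --rc lcov_*_coverage=1 options"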
== FreeBSD ]] 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
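The PATH dumps in the trace above keep growing because /etc/opt/spdk-pkgdep/paths/export.sh prepends the same /opt/golangci, /opt/protoc and /opt/go prefixes every time a nested test sources it; compare the copy printed for nvmf_target_core with the longer one printed here for nvmf_abort. Harmless, but for reference a small sketch of an idempotent prepend (the helper name is illustrative, not part of the harness):

    # Prepend $1 to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already there, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }

    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin      # second call is a no-op
    export PATH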
00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:44.681 07:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:51.289 07:48:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:51.289 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:51.290 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:51.290 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:51.290 07:48:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:51.290 Found net devices under 0000:86:00.0: cvl_0_0 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:51.290 Found net devices under 0000:86:00.1: cvl_0_1 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:51.290 07:48:44 
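In the trace above, common.sh builds tables of supported NIC device IDs, classifies the two Intel functions on this box (0000:86:00.0 and 0000:86:00.1, device 0x159b, E810 family) and resolves each to its kernel netdev through sysfs, which is how cvl_0_0 and cvl_0_1 end up as the target and initiator interfaces. A stripped-down sketch of that sysfs resolution, with the PCI addresses copied from this run and the loop body simplified:

    pci_devs=(0000:86:00.0 0000:86:00.1)    # E810 functions found by the PCI-ID scan
    net_devs=()

    for pci in "${pci_devs[@]}"; do
        # Each bound network function exposes its netdev name under .../net/.
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue
            echo "Found net device under $pci: ${path##*/}"
            net_devs+=("${path##*/}")
        done
    done

    NVMF_TARGET_INTERFACE=${net_devs[0]}     # cvl_0_0 in this run
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}  # cvl_0_1 in this run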
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:51.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:51.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:05:51.290 00:05:51.290 --- 10.0.0.2 ping statistics --- 00:05:51.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:51.290 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:51.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:51.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:05:51.290 00:05:51.290 --- 10.0.0.1 ping statistics --- 00:05:51.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:51.290 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2274051 00:05:51.290 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2274051 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2274051 ']' 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.291 [2024-11-27 07:48:44.448190] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
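The nvmf_tcp_init trace above splits the two ports across network namespaces so one host can run both ends of the NVMe/TCP connection: cvl_0_0 moves into cvl_0_0_ns_spdk and gets 10.0.0.2/24, cvl_0_1 keeps 10.0.0.1/24 in the root namespace, port 4420 is opened in the firewall, connectivity is checked with one ping in each direction, and nvmf_tgt is then launched inside the namespace. Condensed into a plain script for reference (names and addresses as in this run, paths relative to the SPDK repo, root required):

    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"               # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"        # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                 # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> initiator

    # The target application itself then runs inside the namespace:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!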
00:05:51.291 [2024-11-27 07:48:44.448237] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:51.291 [2024-11-27 07:48:44.515252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.291 [2024-11-27 07:48:44.559274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:51.291 [2024-11-27 07:48:44.559312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:51.291 [2024-11-27 07:48:44.559319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:51.291 [2024-11-27 07:48:44.559325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:51.291 [2024-11-27 07:48:44.559330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:51.291 [2024-11-27 07:48:44.560809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.291 [2024-11-27 07:48:44.560900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.291 [2024-11-27 07:48:44.560901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.291 [2024-11-27 07:48:44.698494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.291 Malloc0 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.291 Delay0 
00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.291 [2024-11-27 07:48:44.769562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.291 07:48:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:51.291 [2024-11-27 07:48:44.928133] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:53.258 Initializing NVMe Controllers 00:05:53.258 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:53.258 controller IO queue size 128 less than required 00:05:53.258 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:53.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:53.258 Initialization complete. Launching workers. 
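The rpc_cmd calls traced above build the target that the abort example then exercises: a TCP transport, a 64 MiB / 4 KiB-block malloc bdev wrapped in a delay bdev with large artificial latencies (so aborts always find I/O still in flight to cancel), a subsystem exposing that namespace, and listeners on 10.0.0.2:4420. For reference, the same sequence written as direct rpc.py invocations; rpc_cmd in the harness ultimately drives these same RPCs, and the paths and default RPC socket are assumptions here:

    RPC=./scripts/rpc.py    # run against the nvmf_tgt started inside the namespace

    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: queue depth 128 for 1 second, aborting as it goes,
    # exactly as invoked in the trace above.
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128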
00:05:53.258 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 36067 00:05:53.258 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36131, failed to submit 62 00:05:53.258 success 36071, unsuccessful 60, failed 0 00:05:53.258 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:53.258 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.258 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.258 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.258 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:53.258 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:53.259 rmmod nvme_tcp 00:05:53.259 rmmod nvme_fabrics 00:05:53.259 rmmod nvme_keyring 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2274051 ']' 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2274051 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2274051 ']' 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2274051 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2274051 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2274051' 00:05:53.259 killing process with pid 2274051 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2274051 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2274051 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:53.259 07:48:47 
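The killprocess step in the teardown trace above does not kill blindly: it verifies the pid is non-empty and still alive, reads the process's command name, refuses if it would hit sudo itself, then kills and waits. A compact sketch of that guard, simplified from what the trace shows (the real helper also branches on the OS):

    killprocess() {
        local pid=$1 name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                     # nothing to do if it already exited
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && { echo "refusing to kill sudo ($pid)"; return 1; }
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                # reap it if it is our child
    }

    killprocess "$nvmfpid"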
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:53.259 07:48:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.796 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:55.796 00:05:55.796 real 0m10.825s 00:05:55.796 user 0m11.535s 00:05:55.796 sys 0m5.162s 00:05:55.796 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.796 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:55.797 ************************************ 00:05:55.797 END TEST nvmf_abort 00:05:55.797 ************************************ 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:55.797 ************************************ 00:05:55.797 START TEST nvmf_ns_hotplug_stress 00:05:55.797 ************************************ 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:55.797 * Looking for test storage... 
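The START TEST / END TEST banners and the real/user/sys line above come from the harness's run_test wrapper, which brackets each sub-test and times it. A sketch of that shape only, under the assumption that timing and banners are all that matters here; it is not the project's run_test, which does more bookkeeping:

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test_sketch nvmf_ns_hotplug_stress \
        ./test/nvmf/target/ns_hotplug_stress.sh --transport=tcp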
00:05:55.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.797 --rc genhtml_branch_coverage=1 00:05:55.797 --rc genhtml_function_coverage=1 00:05:55.797 --rc genhtml_legend=1 00:05:55.797 --rc geninfo_all_blocks=1 00:05:55.797 --rc geninfo_unexecuted_blocks=1 00:05:55.797 00:05:55.797 ' 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.797 --rc genhtml_branch_coverage=1 00:05:55.797 --rc genhtml_function_coverage=1 00:05:55.797 --rc genhtml_legend=1 00:05:55.797 --rc geninfo_all_blocks=1 00:05:55.797 --rc geninfo_unexecuted_blocks=1 00:05:55.797 00:05:55.797 ' 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.797 --rc genhtml_branch_coverage=1 00:05:55.797 --rc genhtml_function_coverage=1 00:05:55.797 --rc genhtml_legend=1 00:05:55.797 --rc geninfo_all_blocks=1 00:05:55.797 --rc geninfo_unexecuted_blocks=1 00:05:55.797 00:05:55.797 ' 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.797 --rc genhtml_branch_coverage=1 00:05:55.797 --rc genhtml_function_coverage=1 00:05:55.797 --rc genhtml_legend=1 00:05:55.797 --rc geninfo_all_blocks=1 00:05:55.797 --rc geninfo_unexecuted_blocks=1 00:05:55.797 00:05:55.797 ' 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:55.797 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:55.798 07:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:02.365 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.365 
07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:02.365 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:02.365 Found net devices under 0000:86:00.0: cvl_0_0 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:02.365 Found net devices under 0000:86:00.1: cvl_0_1 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:02.365 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:02.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:02.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:06:02.366 00:06:02.366 --- 10.0.0.2 ping statistics --- 00:06:02.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:02.366 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:02.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:02.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:06:02.366 00:06:02.366 --- 10.0.0.1 ping statistics --- 00:06:02.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:02.366 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2278079 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2278079 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2278079 ']' 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:02.366 [2024-11-27 07:48:55.540727] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:06:02.366 [2024-11-27 07:48:55.540771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:02.366 [2024-11-27 07:48:55.607869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.366 [2024-11-27 07:48:55.649890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:02.366 [2024-11-27 07:48:55.649928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:02.366 [2024-11-27 07:48:55.649935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:02.366 [2024-11-27 07:48:55.649941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:02.366 [2024-11-27 07:48:55.649949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
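The block above is nvmf_tcp_init from test/nvmf/common.sh doing the plumbing for this run: it flushes the two E810 ports it found (cvl_0_0 and cvl_0_1), moves the target-side port into a private network namespace, assigns the 10.0.0.1/10.0.0.2 test addresses, opens TCP port 4420 in the firewall, confirms reachability in both directions with ping, loads nvme-tcp, and finally starts nvmf_tgt inside that namespace on cores 1-3 (-m 0xE). A condensed, hand-written sketch of those steps follows; the interface and namespace names are the ones this particular run detected, so treat them as placeholders rather than fixed values.

# Sketch only, condensed from the nvmf_tcp_init trace above.
# Interface/namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are what this run found.
TARGET_IF=cvl_0_0        # port handed to the SPDK target, moved into the netns
INITIATOR_IF=cvl_0_1     # port left in the host namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in (the script's ipts helper also tags the rule with an
# SPDK_NVMF comment) and check both directions answer before starting the target.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

modprobe nvme-tcp
# The target runs inside the namespace, shm id 0, all tracepoints, cores 1-3, as in the log:
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &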
00:06:02.366 [2024-11-27 07:48:55.651287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.366 [2024-11-27 07:48:55.651376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.366 [2024-11-27 07:48:55.651378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:02.366 [2024-11-27 07:48:55.957567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.366 07:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:02.366 07:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:02.366 [2024-11-27 07:48:56.379061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:02.366 07:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:02.625 07:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:02.884 Malloc0 00:06:02.884 07:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:03.142 Delay0 00:06:03.142 07:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.142 07:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:03.401 NULL1 00:06:03.401 07:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:03.659 07:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2278491 00:06:03.659 07:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:03.659 07:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:03.659 07:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.033 Read completed with error (sct=0, sc=11) 00:06:05.033 07:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.033 07:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:05.033 07:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:05.306 true 00:06:05.306 07:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:05.306 07:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.245 07:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.245 07:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:06.245 07:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:06.503 true 00:06:06.503 07:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:06.503 07:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.761 07:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
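With the target up and listening on /var/tmp/spdk.sock, everything else in this test is driven over JSON-RPC. The calls traced above (ns_hotplug_stress.sh lines 25-42) create the TCP transport, a subsystem with a data listener and a discovery listener on 10.0.0.2:4420, two backing bdevs for its namespaces, and then a 30-second randread workload against it. Restated as a sketch, using the script's own rpc_py shorthand for scripts/rpc.py:

# Sketch of the bring-up traced at ns_hotplug_stress.sh@25-42.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Namespace 1: a malloc bdev wrapped in a delay bdev. Namespace 2: a null bdev
# that the stress loop further down keeps resizing.
$rpc_py bdev_malloc_create 32 512 -b Malloc0
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc_py bdev_null_create NULL1 1000 512
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Background I/O for the hotplug loop to race against (PERF_PID was 2278491 in this run).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!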
00:06:06.761 07:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:06.761 07:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:07.019 true 00:06:07.019 07:49:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:07.019 07:49:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.953 07:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.211 07:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:08.211 07:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:08.467 true 00:06:08.467 07:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:08.467 07:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.725 07:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.725 07:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:08.725 07:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:08.983 true 00:06:08.983 07:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:08.983 07:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.359 07:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.359 07:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:10.359 07:49:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:10.617 true 00:06:10.617 07:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:10.617 07:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.552 07:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.552 07:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:11.552 07:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:11.810 true 00:06:11.810 07:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:11.810 07:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.068 07:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.326 07:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:12.326 07:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:12.326 true 00:06:12.326 07:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:12.326 07:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.702 07:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.702 07:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:13.702 07:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1009 00:06:13.960 true 00:06:13.960 07:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:13.960 07:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.895 07:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.895 07:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:14.895 07:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:15.154 true 00:06:15.154 07:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:15.154 07:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.412 07:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.670 07:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:15.670 07:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:15.670 true 00:06:15.670 07:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:15.670 07:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.042 07:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.042 07:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:17.042 07:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:17.301 true 00:06:17.301 07:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:17.301 
07:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.236 07:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.236 07:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:18.236 07:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:18.493 true 00:06:18.493 07:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:18.493 07:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.751 07:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.009 07:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:19.009 07:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:19.009 true 00:06:19.267 07:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:19.267 07:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.201 07:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.459 07:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:20.459 07:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:20.718 true 00:06:20.718 07:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:20.718 07:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.302 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:21.560 07:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.560 07:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:21.560 07:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:21.818 true 00:06:21.818 07:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:21.818 07:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.076 07:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.333 07:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:22.333 07:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:22.333 true 00:06:22.333 07:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:22.333 07:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.749 07:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.749 07:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:23.749 07:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:24.007 true 00:06:24.007 07:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:24.007 07:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.940 07:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.940 07:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:24.940 07:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:25.207 true 00:06:25.207 07:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:25.207 07:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.469 07:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.469 07:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:25.469 07:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:25.730 true 00:06:25.730 07:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:25.730 07:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.777 07:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.035 07:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:27.035 07:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:27.294 true 00:06:27.294 07:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:27.294 07:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.230 07:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.230 07:49:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:28.230 07:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:28.489 true 00:06:28.489 07:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:28.489 07:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.747 07:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.006 07:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:29.006 07:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:29.006 true 00:06:29.006 07:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:29.264 07:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.200 07:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.458 07:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:30.458 07:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:30.458 true 00:06:30.458 07:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:30.459 07:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.718 07:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.976 07:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:30.976 07:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:31.235 true 00:06:31.235 07:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:31.235 07:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:32.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.169 07:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.427 07:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:32.427 07:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:32.684 true 00:06:32.684 07:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491 00:06:32.684 07:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.620 07:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.620 07:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:33.620 07:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:33.878 Initializing NVMe Controllers 00:06:33.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:33.878 Controller IO queue size 128, less than required. 00:06:33.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:33.878 Controller IO queue size 128, less than required. 00:06:33.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:33.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:33.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:33.878 Initialization complete. Launching workers. 
00:06:33.878 ========================================================
00:06:33.878                                                                              Latency(us)
00:06:33.878 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:33.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1954.70       0.95   45196.93    2968.05 1082051.11
00:06:33.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17538.76       8.56    7298.46    2820.35  456972.82
00:06:33.878 ========================================================
00:06:33.878 Total                                                                  :   19493.46       9.52   11098.72    2820.35 1082051.11
00:06:33.878
00:06:33.878 true
00:06:33.878 07:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2278491
00:06:33.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2278491) - No such process
00:06:33.878 07:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2278491
00:06:33.878 07:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:34.137 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:34.395 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:34.395 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:34.395 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:34.395 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:34.395 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:34.395 null0
00:06:34.395 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:34.395 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:34.395 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:34.654 null1
00:06:34.654 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:34.654 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:34.654 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:34.914 null2
00:06:34.914 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:34.914 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:34.914 07:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
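For readers following the trace: the repeated @44-@50 entries above are the first phase of ns_hotplug_stress.sh. While the background I/O process (PID 2278491 in this run) is still alive, the script keeps hot-removing and re-adding namespace 1 of nqn.2016-06.io.spdk:cnode1 and growing the NULL1 bdev by one unit per pass. A minimal sketch reconstructed from the logged commands (the rpc_py and perf_pid variable names and the starting size are assumptions, not the exact upstream script):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed shorthand for the rpc.py path above
    perf_pid=2278491        # PID of the background I/O generator in this run (variable name assumed)
    null_size=1000          # observed counting up through 1019, 1020, ... in the trace; starting value assumed
    while kill -0 "$perf_pid"; do                                            # @44: loop while the I/O generator is running
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # @45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # @46: re-add it, backed by the Delay0 bdev
        null_size=$((null_size + 1))                                         # @49
        $rpc_py bdev_null_resize NULL1 "$null_size"                          # @50: resize NULL1 while it is under I/O
    done

Once the generator exits, kill -0 fails ("No such process" above), the loop ends, and the script waits on the PID before removing namespaces 1 and 2 (@53-@55).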
00:06:35.173 null3 00:06:35.173 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.173 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.173 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:35.173 null4 00:06:35.431 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.431 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.431 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:35.431 null5 00:06:35.431 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.431 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.431 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:35.689 null6 00:06:35.689 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.689 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.689 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:35.951 null7 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
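The @58-@64 entries above (and the wait at @66 just below) set up the second phase: eight null bdevs are created and eight add_remove workers are launched in the background, one namespace each. Roughly, as reconstructed from the logged script lines (loop structure inferred from the trace, not copied from the upstream source):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed shorthand, as above
    nthreads=8                                        # @58
    pids=()                                           # @58
    for ((i = 0; i < nthreads; i++)); do              # @59
        $rpc_py bdev_null_create "null$i" 100 4096    # @60: 100 MB null bdev with a 4096-byte block size
    done
    for ((i = 0; i < nthreads; i++)); do              # @62
        add_remove $((i + 1)) "null$i" &              # @63: background worker for namespace i+1, backed by null$i (sketched below)
        pids+=($!)                                    # @64: remember the worker PID
    done
    wait "${pids[@]}"                                 # @66: the "wait 2283948 2283949 ..." entry in the trace below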
00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2283948 2283949 2283950 2283953 2283955 2283957 2283958 2283961 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:35.951 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:35.952 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:35.952 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:35.952 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.952 07:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:36.210 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:36.210 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:36.210 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:36.210 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:06:36.210 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:36.210 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.210 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:36.210 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:36.467 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
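Each add_remove worker is the small loop traced at @14-@18 above: it adds and removes its own namespace against nqn.2016-06.io.spdk:cnode1 ten times. A sketch of the function as it appears in the trace (argument handling assumed):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed shorthand, as above
    add_remove() {
        local nsid=$1 bdev=$2                                                            # @14
        for ((i = 0; i < 10; i++)); do                                                   # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17: attach bdev as namespace nsid
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18: detach it again
        done
    }

Because the eight workers run concurrently, their @16-@18 lines interleave in the output above and below; the namespace IDs in the add/remove pairs are what distinguish them.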
00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.725 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.983 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:06:36.983 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:36.983 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:36.983 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:36.983 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:36.983 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:36.983 07:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.983 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.242 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.500 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.500 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.500 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.500 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.500 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.500 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.500 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.500 07:49:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
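If the subsystem state needs to be inspected while this churn is running, the namespace list can be dumped with the standard rpc.py query shown below. This is a manual check for the reader, not a step of ns_hotplug_stress.sh:

    # Lists every NVMe-oF subsystem on the target, including whichever of the
    # eight namespaces happen to be attached to nqn.2016-06.io.spdk:cnode1 at that instant.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems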
00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.757 07:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.035 07:49:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.035 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.036 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.293 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.293 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.293 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.293 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.293 07:49:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.293 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.293 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.293 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.551 07:49:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.551 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.809 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.810 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.810 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.810 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.810 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.810 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.810 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.810 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.810 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.810 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.810 07:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.067 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.067 07:49:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.067 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.067 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.067 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.067 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.067 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.067 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 07:49:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.325 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.582 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.582 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.582 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.582 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.582 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.582 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.582 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.582 07:49:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.840 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.841 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.841 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.841 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.841 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.841 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.841 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.841 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.841 07:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.099 07:49:34 
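The repeated rpc.py calls above are the body of the ns_hotplug_stress loop: on each pass the null bdevs null0..null7 are attached to nqn.2016-06.io.spdk:cnode1 as namespaces 1..8 and then detached again, racing namespace hot-add/hot-remove against connected hosts (the trace shows the calls completing in varying order). A minimal sketch of one such pass, assuming a running SPDK nvmf target with that subsystem and the null bdevs already created; the rpc.py path here is illustrative, not the harness's exact one:

  RPC=./scripts/rpc.py                      # illustrative path to SPDK's rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Attach null0..null7 as namespaces 1..8.
  for n in $(seq 1 8); do
    "$RPC" nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
  done

  # Detach them again while host I/O may still be in flight.
  for n in $(seq 1 8); do
    "$RPC" nvmf_subsystem_remove_ns "$NQN" "$n"
  done
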
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:40.099 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:40.099 rmmod nvme_tcp 00:06:40.099 rmmod nvme_fabrics 00:06:40.358 rmmod nvme_keyring 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2278079 ']' 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2278079 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2278079 ']' 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2278079 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2278079 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2278079' 00:06:40.358 killing process with pid 2278079 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2278079 00:06:40.358 07:49:34 
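After the loop the EXIT trap is cleared and nvmftestfini tears the fixture down: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded and the nvmf_tgt process (pid 2278079 in this run) is signalled and reaped. The killprocess helper visible in the trace first checks that the pid is set and still alive and that it is not a sudo wrapper before killing it; a rough, simplified equivalent of that pattern (the real helper in SPDK's autotest_common.sh has more checks):

  killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local comm
    comm=$(ps --no-headers -o comm= "$pid")          # e.g. "reactor_1" in this run
    [[ $comm != sudo ]] || return 1                  # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap if it is our child
  }
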
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2278079 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.358 07:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:42.891 00:06:42.891 real 0m47.066s 00:06:42.891 user 3m12.468s 00:06:42.891 sys 0m15.721s 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.891 ************************************ 00:06:42.891 END TEST nvmf_ns_hotplug_stress 00:06:42.891 ************************************ 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:42.891 ************************************ 00:06:42.891 START TEST nvmf_delete_subsystem 00:06:42.891 ************************************ 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:42.891 * Looking for test storage... 
00:06:42.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.891 --rc genhtml_branch_coverage=1 00:06:42.891 --rc genhtml_function_coverage=1 00:06:42.891 --rc genhtml_legend=1 00:06:42.891 --rc geninfo_all_blocks=1 00:06:42.891 --rc geninfo_unexecuted_blocks=1 00:06:42.891 00:06:42.891 ' 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.891 --rc genhtml_branch_coverage=1 00:06:42.891 --rc genhtml_function_coverage=1 00:06:42.891 --rc genhtml_legend=1 00:06:42.891 --rc geninfo_all_blocks=1 00:06:42.891 --rc geninfo_unexecuted_blocks=1 00:06:42.891 00:06:42.891 ' 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.891 --rc genhtml_branch_coverage=1 00:06:42.891 --rc genhtml_function_coverage=1 00:06:42.891 --rc genhtml_legend=1 00:06:42.891 --rc geninfo_all_blocks=1 00:06:42.891 --rc geninfo_unexecuted_blocks=1 00:06:42.891 00:06:42.891 ' 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.891 --rc genhtml_branch_coverage=1 00:06:42.891 --rc genhtml_function_coverage=1 00:06:42.891 --rc genhtml_legend=1 00:06:42.891 --rc geninfo_all_blocks=1 00:06:42.891 --rc geninfo_unexecuted_blocks=1 00:06:42.891 00:06:42.891 ' 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.891 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:42.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:42.892 07:49:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:48.160 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.160 
07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:48.160 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:48.160 Found net devices under 0000:86:00.0: cvl_0_0 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:48.160 Found net devices under 0000:86:00.1: cvl_0_1 
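The gather_supported_nvmf_pci_devs block above walks a whitelist of Intel/Mellanox device IDs, keeps the two e810 ports found at 0000:86:00.0 and 0000:86:00.1 (device 0x159b), and then resolves each PCI address to its kernel net devices through sysfs, which is how cvl_0_0 and cvl_0_1 are discovered. A minimal sketch of that sysfs lookup, with the PCI address hard-coded purely as an example taken from the log:

  # List the network interfaces backed by a given PCI function via sysfs.
  pci=0000:86:00.0                                   # substitute your own address
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $netdir ]] || continue
    dev=${netdir##*/}
    state=$(cat "$netdir/operstate" 2>/dev/null)
    echo "Found net device under $pci: $dev (operstate: $state)"
  done
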
00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.160 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.419 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.419 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.419 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:48.419 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.419 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.419 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.419 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:48.419 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:48.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:06:48.419 00:06:48.419 --- 10.0.0.2 ping statistics --- 00:06:48.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.419 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:06:48.419 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:06:48.419 00:06:48.419 --- 10.0.0.1 ping statistics --- 00:06:48.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.420 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2288347 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2288347 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2288347 ']' 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.420 07:49:42 
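Because both e810 ports sit in the same host, nvmf_tcp_init moves the target-side port (cvl_0_0) into a private network namespace, cvl_0_0_ns_spdk, gives the initiator side 10.0.0.1/24 and the target side 10.0.0.2/24, opens TCP port 4420 in iptables, and verifies connectivity with a ping in each direction, which is what the two ping reports above show. The same wiring can be reproduced by hand roughly as follows (interface names and addresses are copied from the log; run as root and adjust to your NICs):

  # Target NIC goes into its own netns so initiator and target use a real TCP path.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1                                        # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0          # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic to the target's listener port.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity checks in both directions.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
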
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.420 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.678 [2024-11-27 07:49:42.548666] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:06:48.678 [2024-11-27 07:49:42.548709] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.678 [2024-11-27 07:49:42.614105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.678 [2024-11-27 07:49:42.653668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.678 [2024-11-27 07:49:42.653704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.678 [2024-11-27 07:49:42.653712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.678 [2024-11-27 07:49:42.653718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.678 [2024-11-27 07:49:42.653723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.678 [2024-11-27 07:49:42.654910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.678 [2024-11-27 07:49:42.654913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.678 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.678 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:48.678 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:48.678 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.678 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.936 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.936 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:48.936 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.936 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.936 [2024-11-27 07:49:42.792351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.936 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:48.937 07:49:42 
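With networking in place, nvmfappstart launches nvmf_tgt inside the target namespace on two reactors (core mask 0x3), waits for the RPC socket, creates a TCP transport with the options seen in the trace, and creates subsystem nqn.2016-06.io.spdk:cnode1 allowing any host, with serial SPDK00000000000001 and a cap of 10 namespaces. A rough equivalent of that sequence, with illustrative binary and script paths and a crude sleep where the harness uses waitforlisten:

  # Start nvmf_tgt in the target namespace, then configure it over the RPC
  # socket (a filesystem UNIX socket, so rpc.py can run from the default netns).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  sleep 2                                            # harness: waitforlisten on /var/tmp/spdk.sock

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192       # transport flags copied from the trace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                # allow any host, serial, max 10 namespaces
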
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.937 [2024-11-27 07:49:42.812563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.937 NULL1 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.937 Delay0 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2288440 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:48.937 07:49:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:48.937 [2024-11-27 07:49:42.914370] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
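The remaining setup adds a TCP listener on 10.0.0.2:4420, builds a deliberately slow namespace by wrapping a 1000 MB, 512-byte-block null bdev (NULL1) in a delay bdev (Delay0) with 1,000,000-microsecond latencies, attaches Delay0 to the subsystem, and then starts spdk_nvme_perf against the listener for 5 seconds of 512-byte random 70/30 read/write I/O at queue depth 128. The long delays ensure that plenty of I/O is still outstanding when the subsystem is deleted two seconds later. Reproduced roughly (paths illustrative, flags copied from the trace):

  RPC=./scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                # 1000 MB backing bdev, 512 B blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000    # latencies in microseconds
  $RPC nvmf_subsystem_add_ns "$NQN" Delay0

  # Drive I/O against the listener while the namespace crawls behind Delay0.
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2                                             # let I/O queue up before the delete
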
00:06:50.836 07:49:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:50.836 07:49:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:50.836 07:49:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[a long run of 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries from spdk_nvme_perf elided; the distinct errors in that run were:]
00:06:51.096 [2024-11-27 07:49:44.955939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0fe000d4d0 is same with the state(6) to be set
00:06:51.096 [2024-11-27 07:49:44.956235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0fe0000c40 is same with the state(6) to be set
00:06:52.032 [2024-11-27 07:49:45.927220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20959b0 is same with the state(6) to be set
00:06:52.032 [2024-11-27 07:49:45.959557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094680 is same with the state(6) to be set
00:06:52.032 [2024-11-27 07:49:45.959728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0fe000d020 is same with the state(6) to be set
00:06:52.032 [2024-11-27 07:49:45.959918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20942c0 is same with the state(6) to be set
00:06:52.032 [2024-11-27 07:49:45.960458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0fe000d800 is same with the state(6) to be set
00:06:52.032 Initializing NVMe Controllers
00:06:52.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:52.032 Controller IO queue size 128, less than required.
00:06:52.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:52.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:52.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:52.032 Initialization complete. Launching workers.
00:06:52.032 ========================================================
00:06:52.032 Latency(us)
00:06:52.032 Device Information : IOPS MiB/s Average min max
00:06:52.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 188.22 0.09 901990.37 394.59 1013476.80
00:06:52.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.47 0.07 1081627.80 305.73 2004586.95
00:06:52.033 ========================================================
00:06:52.033 Total : 339.68 0.17 982091.86 305.73 2004586.95
00:06:52.033
00:06:52.033 [2024-11-27 07:49:45.961080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20959b0 (9): Bad file descriptor
00:06:52.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:52.033 07:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.033 07:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:52.033 07:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2288440
00:06:52.033 07:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2288440
00:06:52.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2288440) - No such process
00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2288440
00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2288440
00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- #
type -t wait 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2288440 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.599 [2024-11-27 07:49:46.490936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2289061 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2289061 00:06:52.599 07:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:52.599 [2024-11-27 07:49:46.566538] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:06:53.165 07:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:53.165 07:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2289061 00:06:53.165 07:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:53.423 07:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:53.423 07:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2289061 00:06:53.423 07:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:53.986 07:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:53.986 07:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2289061 00:06:53.986 07:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:54.551 07:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:54.551 07:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2289061 00:06:54.551 07:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.129 07:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:55.129 07:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2289061 00:06:55.129 07:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.693 07:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:55.693 07:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2289061 00:06:55.693 07:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.951 Initializing NVMe Controllers 00:06:55.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:55.951 Controller IO queue size 128, less than required. 00:06:55.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:55.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:55.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:55.951 Initialization complete. Launching workers. 
00:06:55.951 ======================================================== 00:06:55.951 Latency(us) 00:06:55.951 Device Information : IOPS MiB/s Average min max 00:06:55.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003518.13 1000151.99 1012481.37 00:06:55.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004763.02 1000167.09 1040499.26 00:06:55.951 ======================================================== 00:06:55.951 Total : 256.00 0.12 1004140.57 1000151.99 1040499.26 00:06:55.951 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2289061 00:06:55.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2289061) - No such process 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2289061 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:55.952 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:55.952 rmmod nvme_tcp 00:06:56.209 rmmod nvme_fabrics 00:06:56.209 rmmod nvme_keyring 00:06:56.209 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:56.209 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:56.209 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2288347 ']' 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2288347 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2288347 ']' 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2288347 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2288347 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2288347' 00:06:56.210 killing process with pid 2288347 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2288347 00:06:56.210 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2288347 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.469 07:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.374 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:58.374 00:06:58.374 real 0m15.824s 00:06:58.374 user 0m28.986s 00:06:58.374 sys 0m5.284s 00:06:58.374 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.374 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.374 ************************************ 00:06:58.374 END TEST nvmf_delete_subsystem 00:06:58.374 ************************************ 00:06:58.374 07:49:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:58.374 07:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.374 07:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.374 07:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:58.374 ************************************ 00:06:58.374 START TEST nvmf_host_management 00:06:58.374 ************************************ 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:58.634 * Looking for test storage... 
00:06:58.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.634 --rc genhtml_branch_coverage=1 00:06:58.634 --rc genhtml_function_coverage=1 00:06:58.634 --rc genhtml_legend=1 00:06:58.634 --rc geninfo_all_blocks=1 00:06:58.634 --rc geninfo_unexecuted_blocks=1 00:06:58.634 00:06:58.634 ' 00:06:58.634 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.634 --rc genhtml_branch_coverage=1 00:06:58.634 --rc genhtml_function_coverage=1 00:06:58.634 --rc genhtml_legend=1 00:06:58.635 --rc geninfo_all_blocks=1 00:06:58.635 --rc geninfo_unexecuted_blocks=1 00:06:58.635 00:06:58.635 ' 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.635 --rc genhtml_branch_coverage=1 00:06:58.635 --rc genhtml_function_coverage=1 00:06:58.635 --rc genhtml_legend=1 00:06:58.635 --rc geninfo_all_blocks=1 00:06:58.635 --rc geninfo_unexecuted_blocks=1 00:06:58.635 00:06:58.635 ' 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.635 --rc genhtml_branch_coverage=1 00:06:58.635 --rc genhtml_function_coverage=1 00:06:58.635 --rc genhtml_legend=1 00:06:58.635 --rc geninfo_all_blocks=1 00:06:58.635 --rc geninfo_unexecuted_blocks=1 00:06:58.635 00:06:58.635 ' 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:58.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:58.635 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:58.636 07:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:03.912 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:03.912 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:03.912 Found net devices under 0000:86:00.0: cvl_0_0 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.912 07:49:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:03.912 Found net devices under 0000:86:00.1: cvl_0_1 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.912 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:03.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:03.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:07:03.913 00:07:03.913 --- 10.0.0.2 ping statistics --- 00:07:03.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.913 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:03.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:03.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:07:03.913 00:07:03.913 --- 10.0.0.1 ping statistics --- 00:07:03.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.913 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2293066 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2293066 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2293066 ']' 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:03.913 07:49:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:03.913 [2024-11-27 07:49:57.853569] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:07:03.913 [2024-11-27 07:49:57.853614] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.913 [2024-11-27 07:49:57.920062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.913 [2024-11-27 07:49:57.963868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.913 [2024-11-27 07:49:57.963907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.913 [2024-11-27 07:49:57.963914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.913 [2024-11-27 07:49:57.963921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.913 [2024-11-27 07:49:57.963926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
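
For reference, the network plumbing that nvmftestinit / nvmf_tcp_init performed above boils down to the sequence below. This is a condensed restatement of the commands visible in the trace, not the harness's literal code; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this run (two ports of the same e810 NIC, one moved into a network namespace so target and initiator can talk over real hardware on a single host).

# Condensed from the trace above (nvmf/common.sh, nvmf_tcp_init); names are run-specific.
target_if=cvl_0_0           # port that will carry the NVMe-oF target, inside the namespace
initiator_if=cvl_0_1        # port left in the root namespace for the initiator
ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"                          # move the target NIC into the namespace
ip addr add 10.0.0.1/24 dev "$initiator_if"                   # initiator side
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target side
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
# allow NVMe/TCP (port 4420) in; the harness tags the rule with an SPDK_NVMF comment for later cleanup
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # root namespace -> target namespace
ip netns exec "$ns" ping -c 1 10.0.0.1      # target namespace -> root namespace
modprobe nvme-tcp
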
00:07:03.913 [2024-11-27 07:49:57.965465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.913 [2024-11-27 07:49:57.965555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.913 [2024-11-27 07:49:57.965663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.913 [2024-11-27 07:49:57.965664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.173 [2024-11-27 07:49:58.104554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.173 Malloc0 00:07:04.173 [2024-11-27 07:49:58.175668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2293274 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2293274 /var/tmp/bdevperf.sock 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2293274 ']' 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:04.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:04.173 { 00:07:04.173 "params": { 00:07:04.173 "name": "Nvme$subsystem", 00:07:04.173 "trtype": "$TEST_TRANSPORT", 00:07:04.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:04.173 "adrfam": "ipv4", 00:07:04.173 "trsvcid": "$NVMF_PORT", 00:07:04.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:04.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:04.173 "hdgst": ${hdgst:-false}, 00:07:04.173 "ddgst": ${ddgst:-false} 00:07:04.173 }, 00:07:04.173 "method": "bdev_nvme_attach_controller" 00:07:04.173 } 00:07:04.173 EOF 00:07:04.173 )") 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:04.173 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:04.173 "params": { 00:07:04.173 "name": "Nvme0", 00:07:04.173 "trtype": "tcp", 00:07:04.173 "traddr": "10.0.0.2", 00:07:04.173 "adrfam": "ipv4", 00:07:04.173 "trsvcid": "4420", 00:07:04.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:04.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:04.173 "hdgst": false, 00:07:04.173 "ddgst": false 00:07:04.173 }, 00:07:04.173 "method": "bdev_nvme_attach_controller" 00:07:04.173 }' 00:07:04.173 [2024-11-27 07:49:58.272464] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
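
In the trace, gen_nvmf_target_json only shows the bdev_nvme_attach_controller fragment it feeds to jq; bdevperf's --json option consumes a complete SPDK JSON configuration. A rough standalone equivalent of the launch above is sketched below: the attach-controller parameters are copied verbatim from the trace, while the surrounding "subsystems"/"bdev" wrapper is the standard SPDK JSON-config layout assembled inside the helper (an assumption here, since it is not printed in the trace), and /tmp/bdevperf_nvme.json is a throwaway path used for illustration in place of the harness's /dev/fd process substitution.

# Hypothetical standalone equivalent of the traced bdevperf launch.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 64 outstanding 64 KiB verify I/Os for 10 seconds, as on the traced command line.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10
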
00:07:04.173 [2024-11-27 07:49:58.272511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293274 ] 00:07:04.432 [2024-11-27 07:49:58.336538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.432 [2024-11-27 07:49:58.380081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.691 Running I/O for 10 seconds... 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:04.691 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:04.952 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:04.952 
07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:04.952 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:04.952 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:04.952 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.952 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.952 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.952 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=668 00:07:04.952 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 668 -ge 100 ']' 00:07:04.952 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:04.952 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:04.953 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:04.953 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:04.953 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.953 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.953 [2024-11-27 07:49:58.950215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b0b0 is same with the state(6) to be set 00:07:04.953 [2024-11-27 07:49:58.950250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b0b0 is same with the state(6) to be set 00:07:04.953 [2024-11-27 07:49:58.950258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b0b0 is same with the state(6) to be set 00:07:04.953 [2024-11-27 07:49:58.950264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b0b0 is same with the state(6) to be set 00:07:04.953 [2024-11-27 07:49:58.950271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b0b0 is same with the state(6) to be set 00:07:04.953 [2024-11-27 07:49:58.950278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b0b0 is same with the state(6) to be set 00:07:04.953 [2024-11-27 07:49:58.950284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b0b0 is same with the state(6) to be set 00:07:04.953 [2024-11-27 07:49:58.950290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b0b0 is same with the state(6) to be set 00:07:04.953 [2024-11-27 07:49:58.951117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.953 [2024-11-27 07:49:58.951152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.953 [2024-11-27 07:49:58.951170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.953 [2024-11-27 07:49:58.951185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.953 [2024-11-27 07:49:58.951200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245510 is same with the state(6) to be set 00:07:04.953 [2024-11-27 07:49:58.951306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:04.953 [2024-11-27 07:49:58.951738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.953 [2024-11-27 07:49:58.951753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.953 [2024-11-27 07:49:58.951760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:04.954 [2024-11-27 07:49:58.951886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.951986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.951993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:04.954 [2024-11-27 07:49:58.952046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:04.954 [2024-11-27 07:49:58.952196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.952285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.954 [2024-11-27 07:49:58.952292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.954 [2024-11-27 07:49:58.953242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:04.954 task offset: 98304 on job bdev=Nvme0n1 fails 00:07:04.954 00:07:04.954 Latency(us) 00:07:04.954 [2024-11-27T06:49:59.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.954 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:04.954 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:04.954 Verification LBA range: start 0x0 length 0x400 00:07:04.954 Nvme0n1 : 0.41 1879.58 117.47 156.63 0.00 30588.99 1702.51 27696.08 00:07:04.954 [2024-11-27T06:49:59.063Z] =================================================================================================================== 00:07:04.954 [2024-11-27T06:49:59.063Z] Total : 1879.58 117.47 156.63 0.00 30588.99 1702.51 27696.08 00:07:04.955 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.955 07:49:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:04.955 [2024-11-27 07:49:58.955638] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.955 [2024-11-27 07:49:58.955662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1245510 (9): Bad file descriptor 00:07:04.955 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.955 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.955 [2024-11-27 07:49:58.958051] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:04.955 [2024-11-27 07:49:58.958125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:04.955 [2024-11-27 07:49:58.958148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:04.955 [2024-11-27 07:49:58.958160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:04.955 [2024-11-27 07:49:58.958169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:04.955 [2024-11-27 07:49:58.958176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:04.955 [2024-11-27 07:49:58.958183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1245510 00:07:04.955 [2024-11-27 07:49:58.958204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1245510 (9): Bad file descriptor 00:07:04.955 [2024-11-27 07:49:58.958216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:04.955 [2024-11-27 07:49:58.958223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:04.955 [2024-11-27 07:49:58.958232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:04.955 [2024-11-27 07:49:58.958240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
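
The burst of "ABORTED - SQ DELETION" completions and the failed controller reset above are the point of this part of the test rather than a transport failure: while bdevperf has I/O in flight, the harness revokes the host's access to the subsystem and then restores it. The two RPC calls driving this are visible in the trace; rpc_cmd is effectively the harness's wrapper around scripts/rpc.py, so the same thing can be expressed roughly as:

# Host-management sequence from host_management.sh (NQNs as used in this run).
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
#   -> the target drops the host's queue pairs: in-flight writes complete with
#      "ABORTED - SQ DELETION" and the reconnect attempt is refused with
#      "Subsystem ... does not allow host ...".
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
#   -> the host NQN is allowed again, so a later connection from the same host succeeds.
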
00:07:04.955 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.955 07:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2293274 00:07:05.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2293274) - No such process 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:05.893 { 00:07:05.893 "params": { 00:07:05.893 "name": "Nvme$subsystem", 00:07:05.893 "trtype": "$TEST_TRANSPORT", 00:07:05.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:05.893 "adrfam": "ipv4", 00:07:05.893 "trsvcid": "$NVMF_PORT", 00:07:05.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:05.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:05.893 "hdgst": ${hdgst:-false}, 00:07:05.893 "ddgst": ${ddgst:-false} 00:07:05.893 }, 00:07:05.893 "method": "bdev_nvme_attach_controller" 00:07:05.893 } 00:07:05.893 EOF 00:07:05.893 )") 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:05.893 07:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:05.893 "params": { 00:07:05.893 "name": "Nvme0", 00:07:05.893 "trtype": "tcp", 00:07:05.893 "traddr": "10.0.0.2", 00:07:05.893 "adrfam": "ipv4", 00:07:05.893 "trsvcid": "4420", 00:07:05.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:05.893 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:05.893 "hdgst": false, 00:07:05.893 "ddgst": false 00:07:05.893 }, 00:07:05.893 "method": "bdev_nvme_attach_controller" 00:07:05.893 }' 00:07:06.153 [2024-11-27 07:50:00.022660] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
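
Two of the lines above are easy to misread in isolation. The "No such process" from kill is expected: host_management.sh line 91 evidently pairs the kill with '|| true' (both commands trace to the same line), and the first bdevperf has normally already exited after its reconnect failed (spdk_app_stop'd on non-zero). The second bdevperf invocation is a short re-verification run now that the host NQN is allowed again. Sketched as shell, where perfpid and gen_nvmf_target_json are the harness's own variable and helper:

# Best-effort cleanup of the first bdevperf; '|| true' keeps a missing process
# from aborting the rest of the test.
kill -9 "$perfpid" || true

# Short 1-second verify job with the same attach-controller JSON; /dev/fd/62 in the
# trace corresponds to this process substitution.
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1
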
00:07:06.153 [2024-11-27 07:50:00.022711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293594 ] 00:07:06.153 [2024-11-27 07:50:00.087508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.153 [2024-11-27 07:50:00.129487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.413 Running I/O for 1 seconds... 00:07:07.349 1920.00 IOPS, 120.00 MiB/s 00:07:07.349 Latency(us) 00:07:07.349 [2024-11-27T06:50:01.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.349 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:07.349 Verification LBA range: start 0x0 length 0x400 00:07:07.349 Nvme0n1 : 1.01 1968.89 123.06 0.00 0.00 31993.34 6268.66 28151.99 00:07:07.349 [2024-11-27T06:50:01.458Z] =================================================================================================================== 00:07:07.349 [2024-11-27T06:50:01.458Z] Total : 1968.89 123.06 0.00 0.00 31993.34 6268.66 28151.99 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:07.608 rmmod nvme_tcp 00:07:07.608 rmmod nvme_fabrics 00:07:07.608 rmmod nvme_keyring 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2293066 ']' 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2293066 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2293066 ']' 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2293066 00:07:07.608 07:50:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2293066 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2293066' 00:07:07.608 killing process with pid 2293066 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2293066 00:07:07.608 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2293066 00:07:07.867 [2024-11-27 07:50:01.794149] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.867 07:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.405 07:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:10.405 07:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:10.405 00:07:10.405 real 0m11.415s 00:07:10.405 user 0m18.832s 00:07:10.405 sys 0m4.993s 00:07:10.405 07:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.405 07:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.405 ************************************ 00:07:10.405 END TEST nvmf_host_management 00:07:10.405 ************************************ 00:07:10.405 07:50:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
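The nvmftestfini teardown traced above (and repeated at the end of nvmf_lvol further down) reduces to a short sequence: unload the initiator kernel modules, strip only the iptables rules tagged SPDK_NVMF, remove the test network namespace and flush the initiator-side address. Collapsed into a sketch, with _remove_spdk_ns (hidden behind xtrace_disable_per_cmd in the trace) written out as a plain ip netns delete, which is assumed to be what it amounts to here:

modprobe -v -r nvme-tcp                                # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK-tagged ACCEPT rules
ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns for this run
ip -4 addr flush cvl_0_1                               # clear the initiator-side address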
00:07:10.405 07:50:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.405 07:50:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.405 07:50:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.405 ************************************ 00:07:10.405 START TEST nvmf_lvol 00:07:10.405 ************************************ 00:07:10.405 07:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:10.405 * Looking for test storage... 00:07:10.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.405 --rc genhtml_branch_coverage=1 00:07:10.405 --rc genhtml_function_coverage=1 00:07:10.405 --rc genhtml_legend=1 00:07:10.405 --rc geninfo_all_blocks=1 00:07:10.405 --rc geninfo_unexecuted_blocks=1 00:07:10.405 00:07:10.405 ' 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.405 --rc genhtml_branch_coverage=1 00:07:10.405 --rc genhtml_function_coverage=1 00:07:10.405 --rc genhtml_legend=1 00:07:10.405 --rc geninfo_all_blocks=1 00:07:10.405 --rc geninfo_unexecuted_blocks=1 00:07:10.405 00:07:10.405 ' 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.405 --rc genhtml_branch_coverage=1 00:07:10.405 --rc genhtml_function_coverage=1 00:07:10.405 --rc genhtml_legend=1 00:07:10.405 --rc geninfo_all_blocks=1 00:07:10.405 --rc geninfo_unexecuted_blocks=1 00:07:10.405 00:07:10.405 ' 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.405 --rc genhtml_branch_coverage=1 00:07:10.405 --rc genhtml_function_coverage=1 00:07:10.405 --rc genhtml_legend=1 00:07:10.405 --rc geninfo_all_blocks=1 00:07:10.405 --rc geninfo_unexecuted_blocks=1 00:07:10.405 00:07:10.405 ' 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
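The scripts/common.sh trace just above (lt 1.15 2, then cmp_versions 1.15 '<' 2) is a component-wise version comparison used to decide which lcov options apply. Reduced to a self-contained sketch; the function name, the numeric-only components and the return convention are illustrative rather than the exact SPDK implementation:

version_lt() {
    # Return success (0) when $1 sorts strictly before $2, comparing dot/dash/colon
    # separated components numerically, with missing components counting as 0.
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not less-than
}
version_lt 1.15 2 && echo "lcov is older than 2.x"    # matches the lt 1.15 2 result in the trace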
00:07:10.405 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:10.406 07:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.749 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:15.750 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:15.750 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.750 07:50:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:15.750 Found net devices under 0000:86:00.0: cvl_0_0 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:15.750 Found net devices under 0000:86:00.1: cvl_0_1 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:15.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:07:15.750 00:07:15.750 --- 10.0.0.2 ping statistics --- 00:07:15.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.750 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:15.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:07:15.750 00:07:15.750 --- 10.0.0.1 ping statistics --- 00:07:15.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.750 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2297360 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2297360 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2297360 ']' 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.750 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.750 [2024-11-27 07:50:09.668485] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
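The nvmf_tcp_init sequence traced above gives each TCP run the same topology: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2, the sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a single tagged iptables rule admits TCP port 4420; the two pings confirm reachability in both directions before the target starts. Consolidated here, with interface names, addresses and flags taken verbatim from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# The target then runs inside the namespace, exactly as in the nvmfappstart trace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7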
00:07:15.750 [2024-11-27 07:50:09.668526] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.750 [2024-11-27 07:50:09.736595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.750 [2024-11-27 07:50:09.776642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.750 [2024-11-27 07:50:09.776682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.751 [2024-11-27 07:50:09.776689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.751 [2024-11-27 07:50:09.776695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.751 [2024-11-27 07:50:09.776700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.751 [2024-11-27 07:50:09.777992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.751 [2024-11-27 07:50:09.778027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.751 [2024-11-27 07:50:09.778029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.044 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.044 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:16.044 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:16.044 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.044 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.044 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.044 07:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:16.044 [2024-11-27 07:50:10.089636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.044 07:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:16.303 07:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:16.303 07:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:16.562 07:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:16.562 07:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:16.821 07:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:17.080 07:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=16c304d3-f3db-4a1e-8519-35444cf93b89 00:07:17.080 07:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 16c304d3-f3db-4a1e-8519-35444cf93b89 lvol 20 00:07:17.080 07:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cf110298-5c41-4c9b-87a6-329079fc317a 00:07:17.080 07:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:17.338 07:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cf110298-5c41-4c9b-87a6-329079fc317a 00:07:17.597 07:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:17.855 [2024-11-27 07:50:11.718614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.855 07:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.855 07:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2297685 00:07:17.855 07:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:17.855 07:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:19.232 07:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cf110298-5c41-4c9b-87a6-329079fc317a MY_SNAPSHOT 00:07:19.232 07:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=23ecc64e-134a-4765-a621-8f8d758d61d5 00:07:19.233 07:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cf110298-5c41-4c9b-87a6-329079fc317a 30 00:07:19.491 07:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 23ecc64e-134a-4765-a621-8f8d758d61d5 MY_CLONE 00:07:19.751 07:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c8ad9776-71b2-4794-b85f-9d0556f0f10f 00:07:19.751 07:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c8ad9776-71b2-4794-b85f-9d0556f0f10f 00:07:20.319 07:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2297685 00:07:28.440 Initializing NVMe Controllers 00:07:28.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:28.440 Controller IO queue size 128, less than required. 00:07:28.440 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:28.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:28.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:28.440 Initialization complete. Launching workers. 00:07:28.440 ======================================================== 00:07:28.440 Latency(us) 00:07:28.440 Device Information : IOPS MiB/s Average min max 00:07:28.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11923.50 46.58 10739.82 1830.38 58816.84 00:07:28.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11846.60 46.28 10807.59 3001.05 56760.90 00:07:28.440 ======================================================== 00:07:28.440 Total : 23770.10 92.85 10773.59 1830.38 58816.84 00:07:28.440 00:07:28.440 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:28.440 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cf110298-5c41-4c9b-87a6-329079fc317a 00:07:28.699 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16c304d3-f3db-4a1e-8519-35444cf93b89 00:07:28.699 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:28.699 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:28.699 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:28.699 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:28.699 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:28.699 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.699 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:28.699 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.699 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.699 rmmod nvme_tcp 00:07:28.958 rmmod nvme_fabrics 00:07:28.958 rmmod nvme_keyring 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2297360 ']' 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2297360 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2297360 ']' 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2297360 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2297360 00:07:28.958 07:50:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2297360' 00:07:28.958 killing process with pid 2297360 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2297360 00:07:28.958 07:50:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2297360 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.217 07:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.121 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:31.121 00:07:31.121 real 0m21.215s 00:07:31.121 user 1m2.279s 00:07:31.121 sys 0m7.258s 00:07:31.121 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.121 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.121 ************************************ 00:07:31.121 END TEST nvmf_lvol 00:07:31.121 ************************************ 00:07:31.121 07:50:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:31.121 07:50:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:31.121 07:50:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.121 07:50:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.380 ************************************ 00:07:31.380 START TEST nvmf_lvs_grow 00:07:31.380 ************************************ 00:07:31.380 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:31.380 * Looking for test storage... 
00:07:31.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.380 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:31.380 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:31.380 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:31.380 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:31.380 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.380 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.380 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.380 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:31.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.381 --rc genhtml_branch_coverage=1 00:07:31.381 --rc genhtml_function_coverage=1 00:07:31.381 --rc genhtml_legend=1 00:07:31.381 --rc geninfo_all_blocks=1 00:07:31.381 --rc geninfo_unexecuted_blocks=1 00:07:31.381 00:07:31.381 ' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:31.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.381 --rc genhtml_branch_coverage=1 00:07:31.381 --rc genhtml_function_coverage=1 00:07:31.381 --rc genhtml_legend=1 00:07:31.381 --rc geninfo_all_blocks=1 00:07:31.381 --rc geninfo_unexecuted_blocks=1 00:07:31.381 00:07:31.381 ' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:31.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.381 --rc genhtml_branch_coverage=1 00:07:31.381 --rc genhtml_function_coverage=1 00:07:31.381 --rc genhtml_legend=1 00:07:31.381 --rc geninfo_all_blocks=1 00:07:31.381 --rc geninfo_unexecuted_blocks=1 00:07:31.381 00:07:31.381 ' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:31.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.381 --rc genhtml_branch_coverage=1 00:07:31.381 --rc genhtml_function_coverage=1 00:07:31.381 --rc genhtml_legend=1 00:07:31.381 --rc geninfo_all_blocks=1 00:07:31.381 --rc geninfo_unexecuted_blocks=1 00:07:31.381 00:07:31.381 ' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:31.381 07:50:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.381 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:31.382 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:31.382 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.382 07:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:36.656 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.656 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:36.657 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.657 07:50:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:36.657 Found net devices under 0000:86:00.0: cvl_0_0 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:36.657 Found net devices under 0000:86:00.1: cvl_0_1 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:36.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:07:36.657 00:07:36.657 --- 10.0.0.2 ping statistics --- 00:07:36.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.657 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:36.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:07:36.657 00:07:36.657 --- 10.0.0.1 ping statistics --- 00:07:36.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.657 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2303019 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2303019 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2303019 ']' 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.657 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:36.657 [2024-11-27 07:50:30.615374] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
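The nvmftestinit phase traced above reduces to the following network bring-up, shown as a condensed sketch reconstructed from the xtrace output. The cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses come from this particular run (the two e810 ports detected earlier); they will differ on other hosts, and the iptables line here omits the comment the helper adds.

    # move the target-side port into its own namespace; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (NVMF_INITIATOR_IP)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (NVMF_FIRST_TARGET_IP)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator reachability

Once both pings succeed, nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1), which produced the 'Starting SPDK v25.01-pre' line just above.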
00:07:36.657 [2024-11-27 07:50:30.615416] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.657 [2024-11-27 07:50:30.682316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.657 [2024-11-27 07:50:30.722823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.657 [2024-11-27 07:50:30.722857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.657 [2024-11-27 07:50:30.722868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.657 [2024-11-27 07:50:30.722874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.657 [2024-11-27 07:50:30.722880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.657 [2024-11-27 07:50:30.723433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.916 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.917 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:36.917 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.917 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.917 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.917 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.917 07:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:37.176 [2024-11-27 07:50:31.025466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:37.176 ************************************ 00:07:37.176 START TEST lvs_grow_clean 00:07:37.176 ************************************ 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:37.176 07:50:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:37.176 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.435 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:37.435 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:37.435 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:37.435 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:37.435 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:37.694 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:37.694 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:37.694 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b2ce9638-e794-4c8b-9207-1dacbf489d86 lvol 150 00:07:37.953 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=77b3ac56-1d3a-425f-b0c9-408b16c355a4 00:07:37.953 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:37.953 07:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:37.953 [2024-11-27 07:50:32.036373] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:37.953 [2024-11-27 07:50:32.036425] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:37.953 true 00:07:37.953 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:37.953 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:38.212 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:38.212 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:38.471 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 77b3ac56-1d3a-425f-b0c9-408b16c355a4 00:07:38.730 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:38.730 [2024-11-27 07:50:32.770574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.730 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.989 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2303516 00:07:38.989 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:38.989 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:38.989 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2303516 /var/tmp/bdevperf.sock 00:07:38.989 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2303516 ']' 00:07:38.989 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.989 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.989 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:38.989 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.989 07:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:38.989 [2024-11-27 07:50:33.012617] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
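Before bdevperf attaches below, the lvs_grow_clean setup traced above has already assembled the whole device stack over JSON-RPC. A condensed sketch of that sequence, with $rpc standing for scripts/rpc.py, relative paths in place of the full Jenkins workspace paths, and the run-specific UUIDs elided:

    $rpc nvmf_create_transport -t tcp -o -u 8192
    truncate -s 200M test/nvmf/target/aio_bdev                  # 200 MiB backing file
    $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
         --md-pages-per-cluster-ratio 300 aio_bdev lvs          # yields 49 data clusters here
    $rpc bdev_lvol_create -u <lvs-uuid> lvol 150                # 150 MiB logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf (core mask 0x2, its own RPC socket) then connects as an initiator:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
         -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

The 10-second randwrite workload that follows runs against the resulting Nvme0n1 bdev while the lvstore underneath it is grown.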
00:07:38.989 [2024-11-27 07:50:33.012680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2303516 ] 00:07:38.989 [2024-11-27 07:50:33.073929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.248 [2024-11-27 07:50:33.115402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.248 07:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.248 07:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:39.248 07:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:39.506 Nvme0n1 00:07:39.766 07:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:39.766 [ 00:07:39.766 { 00:07:39.766 "name": "Nvme0n1", 00:07:39.766 "aliases": [ 00:07:39.766 "77b3ac56-1d3a-425f-b0c9-408b16c355a4" 00:07:39.766 ], 00:07:39.766 "product_name": "NVMe disk", 00:07:39.766 "block_size": 4096, 00:07:39.766 "num_blocks": 38912, 00:07:39.766 "uuid": "77b3ac56-1d3a-425f-b0c9-408b16c355a4", 00:07:39.766 "numa_id": 1, 00:07:39.766 "assigned_rate_limits": { 00:07:39.766 "rw_ios_per_sec": 0, 00:07:39.766 "rw_mbytes_per_sec": 0, 00:07:39.766 "r_mbytes_per_sec": 0, 00:07:39.766 "w_mbytes_per_sec": 0 00:07:39.766 }, 00:07:39.766 "claimed": false, 00:07:39.766 "zoned": false, 00:07:39.766 "supported_io_types": { 00:07:39.766 "read": true, 00:07:39.766 "write": true, 00:07:39.766 "unmap": true, 00:07:39.766 "flush": true, 00:07:39.766 "reset": true, 00:07:39.766 "nvme_admin": true, 00:07:39.766 "nvme_io": true, 00:07:39.766 "nvme_io_md": false, 00:07:39.766 "write_zeroes": true, 00:07:39.766 "zcopy": false, 00:07:39.766 "get_zone_info": false, 00:07:39.766 "zone_management": false, 00:07:39.766 "zone_append": false, 00:07:39.766 "compare": true, 00:07:39.766 "compare_and_write": true, 00:07:39.766 "abort": true, 00:07:39.766 "seek_hole": false, 00:07:39.766 "seek_data": false, 00:07:39.766 "copy": true, 00:07:39.766 "nvme_iov_md": false 00:07:39.766 }, 00:07:39.766 "memory_domains": [ 00:07:39.766 { 00:07:39.766 "dma_device_id": "system", 00:07:39.766 "dma_device_type": 1 00:07:39.766 } 00:07:39.766 ], 00:07:39.766 "driver_specific": { 00:07:39.766 "nvme": [ 00:07:39.766 { 00:07:39.766 "trid": { 00:07:39.766 "trtype": "TCP", 00:07:39.766 "adrfam": "IPv4", 00:07:39.766 "traddr": "10.0.0.2", 00:07:39.766 "trsvcid": "4420", 00:07:39.766 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:39.766 }, 00:07:39.766 "ctrlr_data": { 00:07:39.766 "cntlid": 1, 00:07:39.766 "vendor_id": "0x8086", 00:07:39.766 "model_number": "SPDK bdev Controller", 00:07:39.766 "serial_number": "SPDK0", 00:07:39.766 "firmware_revision": "25.01", 00:07:39.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:39.766 "oacs": { 00:07:39.766 "security": 0, 00:07:39.766 "format": 0, 00:07:39.766 "firmware": 0, 00:07:39.766 "ns_manage": 0 00:07:39.766 }, 00:07:39.766 "multi_ctrlr": true, 00:07:39.766 
"ana_reporting": false 00:07:39.766 }, 00:07:39.766 "vs": { 00:07:39.766 "nvme_version": "1.3" 00:07:39.766 }, 00:07:39.766 "ns_data": { 00:07:39.766 "id": 1, 00:07:39.766 "can_share": true 00:07:39.766 } 00:07:39.766 } 00:07:39.766 ], 00:07:39.766 "mp_policy": "active_passive" 00:07:39.766 } 00:07:39.766 } 00:07:39.766 ] 00:07:39.766 07:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2303534 00:07:39.766 07:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:39.766 07:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:40.025 Running I/O for 10 seconds... 00:07:40.961 Latency(us) 00:07:40.961 [2024-11-27T06:50:35.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.962 Nvme0n1 : 1.00 22550.00 88.09 0.00 0.00 0.00 0.00 0.00 00:07:40.962 [2024-11-27T06:50:35.071Z] =================================================================================================================== 00:07:40.962 [2024-11-27T06:50:35.071Z] Total : 22550.00 88.09 0.00 0.00 0.00 0.00 0.00 00:07:40.962 00:07:41.898 07:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:41.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.898 Nvme0n1 : 2.00 22665.00 88.54 0.00 0.00 0.00 0.00 0.00 00:07:41.898 [2024-11-27T06:50:36.007Z] =================================================================================================================== 00:07:41.898 [2024-11-27T06:50:36.007Z] Total : 22665.00 88.54 0.00 0.00 0.00 0.00 0.00 00:07:41.898 00:07:42.157 true 00:07:42.157 07:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:42.157 07:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:42.157 07:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:42.157 07:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:42.157 07:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2303534 00:07:43.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.092 Nvme0n1 : 3.00 22726.00 88.77 0.00 0.00 0.00 0.00 0.00 00:07:43.092 [2024-11-27T06:50:37.201Z] =================================================================================================================== 00:07:43.092 [2024-11-27T06:50:37.201Z] Total : 22726.00 88.77 0.00 0.00 0.00 0.00 0.00 00:07:43.092 00:07:44.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.024 Nvme0n1 : 4.00 22802.00 89.07 0.00 0.00 0.00 0.00 0.00 00:07:44.024 [2024-11-27T06:50:38.133Z] 
=================================================================================================================== 00:07:44.024 [2024-11-27T06:50:38.133Z] Total : 22802.00 89.07 0.00 0.00 0.00 0.00 0.00 00:07:44.024 00:07:44.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.958 Nvme0n1 : 5.00 22850.80 89.26 0.00 0.00 0.00 0.00 0.00 00:07:44.958 [2024-11-27T06:50:39.067Z] =================================================================================================================== 00:07:44.958 [2024-11-27T06:50:39.067Z] Total : 22850.80 89.26 0.00 0.00 0.00 0.00 0.00 00:07:44.958 00:07:45.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.896 Nvme0n1 : 6.00 22865.00 89.32 0.00 0.00 0.00 0.00 0.00 00:07:45.896 [2024-11-27T06:50:40.005Z] =================================================================================================================== 00:07:45.896 [2024-11-27T06:50:40.005Z] Total : 22865.00 89.32 0.00 0.00 0.00 0.00 0.00 00:07:45.896 00:07:46.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.832 Nvme0n1 : 7.00 22825.86 89.16 0.00 0.00 0.00 0.00 0.00 00:07:46.832 [2024-11-27T06:50:40.941Z] =================================================================================================================== 00:07:46.832 [2024-11-27T06:50:40.941Z] Total : 22825.86 89.16 0.00 0.00 0.00 0.00 0.00 00:07:46.832 00:07:48.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.209 Nvme0n1 : 8.00 22850.25 89.26 0.00 0.00 0.00 0.00 0.00 00:07:48.209 [2024-11-27T06:50:42.318Z] =================================================================================================================== 00:07:48.209 [2024-11-27T06:50:42.318Z] Total : 22850.25 89.26 0.00 0.00 0.00 0.00 0.00 00:07:48.209 00:07:49.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.146 Nvme0n1 : 9.00 22870.11 89.34 0.00 0.00 0.00 0.00 0.00 00:07:49.146 [2024-11-27T06:50:43.255Z] =================================================================================================================== 00:07:49.146 [2024-11-27T06:50:43.255Z] Total : 22870.11 89.34 0.00 0.00 0.00 0.00 0.00 00:07:49.146 00:07:50.083 00:07:50.083 Latency(us) 00:07:50.083 [2024-11-27T06:50:44.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.083 Nvme0n1 : 10.00 22895.04 89.43 0.00 0.00 5587.68 3219.81 11226.60 00:07:50.083 [2024-11-27T06:50:44.192Z] =================================================================================================================== 00:07:50.083 [2024-11-27T06:50:44.192Z] Total : 22895.04 89.43 0.00 0.00 5587.68 3219.81 11226.60 00:07:50.083 { 00:07:50.083 "results": [ 00:07:50.083 { 00:07:50.083 "job": "Nvme0n1", 00:07:50.083 "core_mask": "0x2", 00:07:50.083 "workload": "randwrite", 00:07:50.083 "status": "finished", 00:07:50.083 "queue_depth": 128, 00:07:50.083 "io_size": 4096, 00:07:50.083 "runtime": 10.001729, 00:07:50.083 "iops": 22895.041447333755, 00:07:50.083 "mibps": 89.43375565364748, 00:07:50.083 "io_failed": 0, 00:07:50.083 "io_timeout": 0, 00:07:50.083 "avg_latency_us": 5587.678434683877, 00:07:50.083 "min_latency_us": 3219.8121739130434, 00:07:50.083 "max_latency_us": 11226.601739130434 00:07:50.083 } 00:07:50.083 ], 00:07:50.083 "core_count": 1 00:07:50.083 } 00:07:50.083 07:50:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2303516 00:07:50.083 07:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2303516 ']' 00:07:50.083 07:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2303516 00:07:50.083 07:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:50.083 07:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.083 07:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2303516 00:07:50.083 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:50.083 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:50.083 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2303516' 00:07:50.083 killing process with pid 2303516 00:07:50.083 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2303516 00:07:50.083 Received shutdown signal, test time was about 10.000000 seconds 00:07:50.083 00:07:50.083 Latency(us) 00:07:50.083 [2024-11-27T06:50:44.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.083 [2024-11-27T06:50:44.192Z] =================================================================================================================== 00:07:50.083 [2024-11-27T06:50:44.192Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:50.083 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2303516 00:07:50.083 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:50.342 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:50.601 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:50.601 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:50.860 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:50.860 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:50.860 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.860 [2024-11-27 07:50:44.921575] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:50.860 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:50.860 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:50.860 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:50.860 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.860 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.860 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.119 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.119 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.119 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.119 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.119 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:51.119 07:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:51.119 request: 00:07:51.119 { 00:07:51.119 "uuid": "b2ce9638-e794-4c8b-9207-1dacbf489d86", 00:07:51.119 "method": "bdev_lvol_get_lvstores", 00:07:51.119 "req_id": 1 00:07:51.119 } 00:07:51.119 Got JSON-RPC error response 00:07:51.119 response: 00:07:51.119 { 00:07:51.119 "code": -19, 00:07:51.119 "message": "No such device" 00:07:51.119 } 00:07:51.119 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:51.119 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.119 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:51.119 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.119 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:51.378 aio_bdev 00:07:51.378 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 77b3ac56-1d3a-425f-b0c9-408b16c355a4 00:07:51.379 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@903 -- # local bdev_name=77b3ac56-1d3a-425f-b0c9-408b16c355a4 00:07:51.379 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.379 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:51.379 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.379 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.379 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:51.638 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 77b3ac56-1d3a-425f-b0c9-408b16c355a4 -t 2000 00:07:51.638 [ 00:07:51.638 { 00:07:51.638 "name": "77b3ac56-1d3a-425f-b0c9-408b16c355a4", 00:07:51.638 "aliases": [ 00:07:51.638 "lvs/lvol" 00:07:51.638 ], 00:07:51.638 "product_name": "Logical Volume", 00:07:51.638 "block_size": 4096, 00:07:51.638 "num_blocks": 38912, 00:07:51.638 "uuid": "77b3ac56-1d3a-425f-b0c9-408b16c355a4", 00:07:51.638 "assigned_rate_limits": { 00:07:51.638 "rw_ios_per_sec": 0, 00:07:51.638 "rw_mbytes_per_sec": 0, 00:07:51.638 "r_mbytes_per_sec": 0, 00:07:51.638 "w_mbytes_per_sec": 0 00:07:51.638 }, 00:07:51.638 "claimed": false, 00:07:51.638 "zoned": false, 00:07:51.638 "supported_io_types": { 00:07:51.638 "read": true, 00:07:51.638 "write": true, 00:07:51.638 "unmap": true, 00:07:51.638 "flush": false, 00:07:51.638 "reset": true, 00:07:51.638 "nvme_admin": false, 00:07:51.638 "nvme_io": false, 00:07:51.638 "nvme_io_md": false, 00:07:51.638 "write_zeroes": true, 00:07:51.638 "zcopy": false, 00:07:51.638 "get_zone_info": false, 00:07:51.638 "zone_management": false, 00:07:51.638 "zone_append": false, 00:07:51.638 "compare": false, 00:07:51.638 "compare_and_write": false, 00:07:51.638 "abort": false, 00:07:51.638 "seek_hole": true, 00:07:51.638 "seek_data": true, 00:07:51.638 "copy": false, 00:07:51.638 "nvme_iov_md": false 00:07:51.638 }, 00:07:51.638 "driver_specific": { 00:07:51.638 "lvol": { 00:07:51.638 "lvol_store_uuid": "b2ce9638-e794-4c8b-9207-1dacbf489d86", 00:07:51.638 "base_bdev": "aio_bdev", 00:07:51.638 "thin_provision": false, 00:07:51.638 "num_allocated_clusters": 38, 00:07:51.638 "snapshot": false, 00:07:51.638 "clone": false, 00:07:51.638 "esnap_clone": false 00:07:51.638 } 00:07:51.638 } 00:07:51.638 } 00:07:51.638 ] 00:07:51.638 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:51.638 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:51.638 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:51.897 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:51.897 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:51.897 07:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:52.155 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:52.155 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 77b3ac56-1d3a-425f-b0c9-408b16c355a4 00:07:52.414 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2ce9638-e794-4c8b-9207-1dacbf489d86 00:07:52.414 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:52.673 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.673 00:07:52.673 real 0m15.655s 00:07:52.673 user 0m15.232s 00:07:52.673 sys 0m1.444s 00:07:52.673 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.673 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:52.673 ************************************ 00:07:52.673 END TEST lvs_grow_clean 00:07:52.673 ************************************ 00:07:52.673 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:52.673 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.673 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.673 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.932 ************************************ 00:07:52.932 START TEST lvs_grow_dirty 00:07:52.932 ************************************ 00:07:52.932 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:52.932 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:52.932 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:52.932 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:52.932 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:52.932 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:52.932 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:52.932 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.932 07:50:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.932 07:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.932 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:52.932 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:53.191 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:07:53.191 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:07:53.191 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:53.449 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:53.449 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:53.449 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 lvol 150 00:07:53.707 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5cd40bb9-6dca-4991-9278-9771e4cab8c3 00:07:53.707 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.707 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:53.707 [2024-11-27 07:50:47.760816] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:53.707 [2024-11-27 07:50:47.760866] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:53.707 true 00:07:53.707 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:07:53.707 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:53.965 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:53.965 07:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:54.224 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5cd40bb9-6dca-4991-9278-9771e4cab8c3 00:07:54.482 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:54.482 [2024-11-27 07:50:48.523082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.482 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:54.740 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2306120 00:07:54.740 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:54.740 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:54.740 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2306120 /var/tmp/bdevperf.sock 00:07:54.740 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2306120 ']' 00:07:54.740 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:54.740 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.740 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:54.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:54.740 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.740 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:54.740 [2024-11-27 07:50:48.779150] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
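# Annotation: the export-and-measure leg, condensed (paths shortened; addresses, ports and NQNs as
# logged). The lvol is published over NVMe/TCP, then a separate bdevperf instance on its own RPC
# socket attaches to it and drives 4 KiB random writes for 10 s:
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &   # idles until perform_tests
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests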
00:07:54.740 [2024-11-27 07:50:48.779210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306120 ] 00:07:54.740 [2024-11-27 07:50:48.841835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.999 [2024-11-27 07:50:48.884453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.000 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.000 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:55.000 07:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:55.257 Nvme0n1 00:07:55.257 07:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:55.516 [ 00:07:55.516 { 00:07:55.516 "name": "Nvme0n1", 00:07:55.516 "aliases": [ 00:07:55.516 "5cd40bb9-6dca-4991-9278-9771e4cab8c3" 00:07:55.516 ], 00:07:55.516 "product_name": "NVMe disk", 00:07:55.516 "block_size": 4096, 00:07:55.516 "num_blocks": 38912, 00:07:55.516 "uuid": "5cd40bb9-6dca-4991-9278-9771e4cab8c3", 00:07:55.516 "numa_id": 1, 00:07:55.516 "assigned_rate_limits": { 00:07:55.516 "rw_ios_per_sec": 0, 00:07:55.516 "rw_mbytes_per_sec": 0, 00:07:55.516 "r_mbytes_per_sec": 0, 00:07:55.516 "w_mbytes_per_sec": 0 00:07:55.516 }, 00:07:55.516 "claimed": false, 00:07:55.516 "zoned": false, 00:07:55.516 "supported_io_types": { 00:07:55.516 "read": true, 00:07:55.516 "write": true, 00:07:55.516 "unmap": true, 00:07:55.516 "flush": true, 00:07:55.516 "reset": true, 00:07:55.516 "nvme_admin": true, 00:07:55.516 "nvme_io": true, 00:07:55.516 "nvme_io_md": false, 00:07:55.516 "write_zeroes": true, 00:07:55.516 "zcopy": false, 00:07:55.516 "get_zone_info": false, 00:07:55.516 "zone_management": false, 00:07:55.516 "zone_append": false, 00:07:55.516 "compare": true, 00:07:55.516 "compare_and_write": true, 00:07:55.516 "abort": true, 00:07:55.516 "seek_hole": false, 00:07:55.516 "seek_data": false, 00:07:55.516 "copy": true, 00:07:55.516 "nvme_iov_md": false 00:07:55.516 }, 00:07:55.516 "memory_domains": [ 00:07:55.516 { 00:07:55.516 "dma_device_id": "system", 00:07:55.516 "dma_device_type": 1 00:07:55.516 } 00:07:55.516 ], 00:07:55.516 "driver_specific": { 00:07:55.516 "nvme": [ 00:07:55.516 { 00:07:55.516 "trid": { 00:07:55.516 "trtype": "TCP", 00:07:55.516 "adrfam": "IPv4", 00:07:55.516 "traddr": "10.0.0.2", 00:07:55.516 "trsvcid": "4420", 00:07:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:55.516 }, 00:07:55.516 "ctrlr_data": { 00:07:55.516 "cntlid": 1, 00:07:55.516 "vendor_id": "0x8086", 00:07:55.516 "model_number": "SPDK bdev Controller", 00:07:55.516 "serial_number": "SPDK0", 00:07:55.516 "firmware_revision": "25.01", 00:07:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.516 "oacs": { 00:07:55.516 "security": 0, 00:07:55.516 "format": 0, 00:07:55.516 "firmware": 0, 00:07:55.516 "ns_manage": 0 00:07:55.516 }, 00:07:55.516 "multi_ctrlr": true, 00:07:55.516 
"ana_reporting": false 00:07:55.516 }, 00:07:55.516 "vs": { 00:07:55.516 "nvme_version": "1.3" 00:07:55.516 }, 00:07:55.516 "ns_data": { 00:07:55.516 "id": 1, 00:07:55.516 "can_share": true 00:07:55.516 } 00:07:55.516 } 00:07:55.516 ], 00:07:55.516 "mp_policy": "active_passive" 00:07:55.516 } 00:07:55.516 } 00:07:55.516 ] 00:07:55.516 07:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2306306 00:07:55.516 07:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:55.517 07:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:55.517 Running I/O for 10 seconds... 00:07:56.895 Latency(us) 00:07:56.895 [2024-11-27T06:50:51.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.895 Nvme0n1 : 1.00 22562.00 88.13 0.00 0.00 0.00 0.00 0.00 00:07:56.895 [2024-11-27T06:50:51.004Z] =================================================================================================================== 00:07:56.895 [2024-11-27T06:50:51.004Z] Total : 22562.00 88.13 0.00 0.00 0.00 0.00 0.00 00:07:56.895 00:07:57.463 07:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:07:57.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.463 Nvme0n1 : 2.00 22811.50 89.11 0.00 0.00 0.00 0.00 0.00 00:07:57.463 [2024-11-27T06:50:51.572Z] =================================================================================================================== 00:07:57.463 [2024-11-27T06:50:51.572Z] Total : 22811.50 89.11 0.00 0.00 0.00 0.00 0.00 00:07:57.463 00:07:57.722 true 00:07:57.722 07:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:07:57.722 07:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:57.982 07:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:57.982 07:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:57.982 07:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2306306 00:07:58.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.550 Nvme0n1 : 3.00 22870.00 89.34 0.00 0.00 0.00 0.00 0.00 00:07:58.550 [2024-11-27T06:50:52.659Z] =================================================================================================================== 00:07:58.550 [2024-11-27T06:50:52.659Z] Total : 22870.00 89.34 0.00 0.00 0.00 0.00 0.00 00:07:58.550 00:07:59.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.488 Nvme0n1 : 4.00 22932.50 89.58 0.00 0.00 0.00 0.00 0.00 00:07:59.488 [2024-11-27T06:50:53.597Z] 
=================================================================================================================== 00:07:59.488 [2024-11-27T06:50:53.597Z] Total : 22932.50 89.58 0.00 0.00 0.00 0.00 0.00 00:07:59.488 00:08:00.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.865 Nvme0n1 : 5.00 22995.60 89.83 0.00 0.00 0.00 0.00 0.00 00:08:00.865 [2024-11-27T06:50:54.975Z] =================================================================================================================== 00:08:00.866 [2024-11-27T06:50:54.975Z] Total : 22995.60 89.83 0.00 0.00 0.00 0.00 0.00 00:08:00.866 00:08:01.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.802 Nvme0n1 : 6.00 23024.67 89.94 0.00 0.00 0.00 0.00 0.00 00:08:01.802 [2024-11-27T06:50:55.911Z] =================================================================================================================== 00:08:01.802 [2024-11-27T06:50:55.911Z] Total : 23024.67 89.94 0.00 0.00 0.00 0.00 0.00 00:08:01.802 00:08:02.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.738 Nvme0n1 : 7.00 23046.71 90.03 0.00 0.00 0.00 0.00 0.00 00:08:02.738 [2024-11-27T06:50:56.847Z] =================================================================================================================== 00:08:02.738 [2024-11-27T06:50:56.847Z] Total : 23046.71 90.03 0.00 0.00 0.00 0.00 0.00 00:08:02.738 00:08:03.674 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.674 Nvme0n1 : 8.00 23079.12 90.15 0.00 0.00 0.00 0.00 0.00 00:08:03.674 [2024-11-27T06:50:57.783Z] =================================================================================================================== 00:08:03.674 [2024-11-27T06:50:57.783Z] Total : 23079.12 90.15 0.00 0.00 0.00 0.00 0.00 00:08:03.674 00:08:04.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.610 Nvme0n1 : 9.00 23104.33 90.25 0.00 0.00 0.00 0.00 0.00 00:08:04.610 [2024-11-27T06:50:58.719Z] =================================================================================================================== 00:08:04.610 [2024-11-27T06:50:58.719Z] Total : 23104.33 90.25 0.00 0.00 0.00 0.00 0.00 00:08:04.610 00:08:05.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.577 Nvme0n1 : 10.00 23112.30 90.28 0.00 0.00 0.00 0.00 0.00 00:08:05.577 [2024-11-27T06:50:59.686Z] =================================================================================================================== 00:08:05.577 [2024-11-27T06:50:59.686Z] Total : 23112.30 90.28 0.00 0.00 0.00 0.00 0.00 00:08:05.577 00:08:05.577 00:08:05.577 Latency(us) 00:08:05.577 [2024-11-27T06:50:59.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.577 Nvme0n1 : 10.00 23111.90 90.28 0.00 0.00 5534.93 3319.54 12195.39 00:08:05.577 [2024-11-27T06:50:59.686Z] =================================================================================================================== 00:08:05.577 [2024-11-27T06:50:59.686Z] Total : 23111.90 90.28 0.00 0.00 5534.93 3319.54 12195.39 00:08:05.577 { 00:08:05.577 "results": [ 00:08:05.577 { 00:08:05.577 "job": "Nvme0n1", 00:08:05.577 "core_mask": "0x2", 00:08:05.577 "workload": "randwrite", 00:08:05.577 "status": "finished", 00:08:05.577 "queue_depth": 128, 00:08:05.577 "io_size": 4096, 00:08:05.577 
"runtime": 10.002944, 00:08:05.577 "iops": 23111.895857859447, 00:08:05.577 "mibps": 90.28084319476346, 00:08:05.577 "io_failed": 0, 00:08:05.577 "io_timeout": 0, 00:08:05.577 "avg_latency_us": 5534.9288970776715, 00:08:05.577 "min_latency_us": 3319.5408695652172, 00:08:05.577 "max_latency_us": 12195.394782608695 00:08:05.577 } 00:08:05.577 ], 00:08:05.577 "core_count": 1 00:08:05.577 } 00:08:05.577 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2306120 00:08:05.577 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2306120 ']' 00:08:05.577 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2306120 00:08:05.577 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:05.577 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.577 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306120 00:08:05.904 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:05.904 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:05.904 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306120' 00:08:05.904 killing process with pid 2306120 00:08:05.904 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2306120 00:08:05.904 Received shutdown signal, test time was about 10.000000 seconds 00:08:05.904 00:08:05.904 Latency(us) 00:08:05.904 [2024-11-27T06:51:00.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.904 [2024-11-27T06:51:00.013Z] =================================================================================================================== 00:08:05.904 [2024-11-27T06:51:00.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:05.904 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2306120 00:08:05.905 07:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.168 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:06.168 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:08:06.168 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:06.427 07:51:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2303019 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2303019 00:08:06.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2303019 Killed "${NVMF_APP[@]}" "$@" 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2308166 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2308166 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2308166 ']' 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.427 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.427 [2024-11-27 07:51:00.528653] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:08:06.427 [2024-11-27 07:51:00.528702] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.686 [2024-11-27 07:51:00.595493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.686 [2024-11-27 07:51:00.637289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.686 [2024-11-27 07:51:00.637326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.686 [2024-11-27 07:51:00.637334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.686 [2024-11-27 07:51:00.637340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
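# Annotation: this is the "dirty" part of the test. The original nvmf target (pid 2303019 in this
# run) is killed with SIGKILL so the lvstore is not cleanly unloaded, a fresh nvmf_tgt is started
# in the cvl_0_0_ns_spdk namespace, and re-registering the AIO bdev forces blobstore recovery
# (the "Performing recovery on blobstore" / "Recover: blob 0x0/0x1" notices below). Condensed,
# with paths shortened:
kill -9 "$old_nvmfpid"                                    # simulate a crash with dirty metadata
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
rpc.py bdev_aio_create aio_file aio_bdev 4096             # triggers blobstore recovery on load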
00:08:06.686 [2024-11-27 07:51:00.637345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.686 [2024-11-27 07:51:00.637919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.686 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.686 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:06.686 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.686 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.686 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.686 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.686 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.945 [2024-11-27 07:51:00.949281] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:06.945 [2024-11-27 07:51:00.949360] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:06.945 [2024-11-27 07:51:00.949384] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:06.945 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:06.945 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5cd40bb9-6dca-4991-9278-9771e4cab8c3 00:08:06.945 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5cd40bb9-6dca-4991-9278-9771e4cab8c3 00:08:06.945 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:06.945 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:06.945 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:06.945 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:06.945 07:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:07.204 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5cd40bb9-6dca-4991-9278-9771e4cab8c3 -t 2000 00:08:07.463 [ 00:08:07.463 { 00:08:07.463 "name": "5cd40bb9-6dca-4991-9278-9771e4cab8c3", 00:08:07.463 "aliases": [ 00:08:07.463 "lvs/lvol" 00:08:07.463 ], 00:08:07.463 "product_name": "Logical Volume", 00:08:07.463 "block_size": 4096, 00:08:07.463 "num_blocks": 38912, 00:08:07.463 "uuid": "5cd40bb9-6dca-4991-9278-9771e4cab8c3", 00:08:07.463 "assigned_rate_limits": { 00:08:07.463 "rw_ios_per_sec": 0, 00:08:07.463 "rw_mbytes_per_sec": 0, 
00:08:07.463 "r_mbytes_per_sec": 0, 00:08:07.463 "w_mbytes_per_sec": 0 00:08:07.463 }, 00:08:07.463 "claimed": false, 00:08:07.463 "zoned": false, 00:08:07.463 "supported_io_types": { 00:08:07.463 "read": true, 00:08:07.463 "write": true, 00:08:07.463 "unmap": true, 00:08:07.463 "flush": false, 00:08:07.463 "reset": true, 00:08:07.463 "nvme_admin": false, 00:08:07.463 "nvme_io": false, 00:08:07.463 "nvme_io_md": false, 00:08:07.463 "write_zeroes": true, 00:08:07.463 "zcopy": false, 00:08:07.463 "get_zone_info": false, 00:08:07.463 "zone_management": false, 00:08:07.463 "zone_append": false, 00:08:07.463 "compare": false, 00:08:07.463 "compare_and_write": false, 00:08:07.463 "abort": false, 00:08:07.463 "seek_hole": true, 00:08:07.463 "seek_data": true, 00:08:07.463 "copy": false, 00:08:07.463 "nvme_iov_md": false 00:08:07.463 }, 00:08:07.463 "driver_specific": { 00:08:07.463 "lvol": { 00:08:07.463 "lvol_store_uuid": "a2ca8fc1-bd50-4cfc-bac3-8c22933a1286", 00:08:07.463 "base_bdev": "aio_bdev", 00:08:07.463 "thin_provision": false, 00:08:07.463 "num_allocated_clusters": 38, 00:08:07.463 "snapshot": false, 00:08:07.463 "clone": false, 00:08:07.463 "esnap_clone": false 00:08:07.463 } 00:08:07.463 } 00:08:07.463 } 00:08:07.463 ] 00:08:07.463 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:07.463 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:07.463 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:08:07.463 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:07.463 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:08:07.463 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:07.722 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:07.722 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.982 [2024-11-27 07:51:01.894358] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:07.982 07:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:08:08.240 request: 00:08:08.240 { 00:08:08.240 "uuid": "a2ca8fc1-bd50-4cfc-bac3-8c22933a1286", 00:08:08.240 "method": "bdev_lvol_get_lvstores", 00:08:08.240 "req_id": 1 00:08:08.240 } 00:08:08.240 Got JSON-RPC error response 00:08:08.240 response: 00:08:08.240 { 00:08:08.240 "code": -19, 00:08:08.240 "message": "No such device" 00:08:08.240 } 00:08:08.240 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:08.240 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.240 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.240 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.240 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.240 aio_bdev 00:08:08.240 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5cd40bb9-6dca-4991-9278-9771e4cab8c3 00:08:08.240 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5cd40bb9-6dca-4991-9278-9771e4cab8c3 00:08:08.240 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.240 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:08.240 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.241 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.241 07:51:02 
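# Annotation: the negative check above, condensed. With the AIO base bdev deleted, the lvstore
# lookup must fail (the JSON-RPC error -19 "No such device" just logged); re-creating the AIO
# bdev lets examine rediscover the lvstore, and the lvol should be visible again (checked right
# below). The NOT() wrapper from autotest_common.sh is rendered here as a plain inversion:
rpc.py bdev_aio_delete aio_bdev                           # hot-removes the lvstore's base bdev
rpc.py bdev_lvol_get_lvstores -u "$lvs" && exit 1         # success here would fail the test
rpc.py bdev_aio_create aio_file aio_bdev 4096             # aio_bdev comes back
rpc.py bdev_wait_for_examine
rpc.py bdev_get_bdevs -b "$lvol" -t 2000                  # waitforbdev: lvol rediscovered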
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.499 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5cd40bb9-6dca-4991-9278-9771e4cab8c3 -t 2000 00:08:08.758 [ 00:08:08.758 { 00:08:08.758 "name": "5cd40bb9-6dca-4991-9278-9771e4cab8c3", 00:08:08.758 "aliases": [ 00:08:08.758 "lvs/lvol" 00:08:08.758 ], 00:08:08.758 "product_name": "Logical Volume", 00:08:08.758 "block_size": 4096, 00:08:08.758 "num_blocks": 38912, 00:08:08.758 "uuid": "5cd40bb9-6dca-4991-9278-9771e4cab8c3", 00:08:08.758 "assigned_rate_limits": { 00:08:08.758 "rw_ios_per_sec": 0, 00:08:08.758 "rw_mbytes_per_sec": 0, 00:08:08.758 "r_mbytes_per_sec": 0, 00:08:08.758 "w_mbytes_per_sec": 0 00:08:08.758 }, 00:08:08.758 "claimed": false, 00:08:08.758 "zoned": false, 00:08:08.758 "supported_io_types": { 00:08:08.758 "read": true, 00:08:08.758 "write": true, 00:08:08.758 "unmap": true, 00:08:08.758 "flush": false, 00:08:08.758 "reset": true, 00:08:08.758 "nvme_admin": false, 00:08:08.758 "nvme_io": false, 00:08:08.758 "nvme_io_md": false, 00:08:08.758 "write_zeroes": true, 00:08:08.758 "zcopy": false, 00:08:08.758 "get_zone_info": false, 00:08:08.758 "zone_management": false, 00:08:08.758 "zone_append": false, 00:08:08.758 "compare": false, 00:08:08.758 "compare_and_write": false, 00:08:08.758 "abort": false, 00:08:08.758 "seek_hole": true, 00:08:08.758 "seek_data": true, 00:08:08.758 "copy": false, 00:08:08.758 "nvme_iov_md": false 00:08:08.758 }, 00:08:08.758 "driver_specific": { 00:08:08.758 "lvol": { 00:08:08.758 "lvol_store_uuid": "a2ca8fc1-bd50-4cfc-bac3-8c22933a1286", 00:08:08.758 "base_bdev": "aio_bdev", 00:08:08.758 "thin_provision": false, 00:08:08.758 "num_allocated_clusters": 38, 00:08:08.758 "snapshot": false, 00:08:08.758 "clone": false, 00:08:08.758 "esnap_clone": false 00:08:08.758 } 00:08:08.758 } 00:08:08.758 } 00:08:08.758 ] 00:08:08.758 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:08.758 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:08:08.758 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:08.758 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:08.758 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:08:08.758 07:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:09.016 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:09.016 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5cd40bb9-6dca-4991-9278-9771e4cab8c3 00:08:09.274 07:51:03 
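# Annotation: quick arithmetic behind the free_clusters/data_clusters guards above. The 150 MiB
# lvol is allocated in 4 MiB clusters, so it occupies 38 of them ("num_allocated_clusters": 38,
# i.e. 152 MiB = 38912 blocks of 4096 B), and the grown lvstore holds 99 data clusters:
echo $(( (150 * 1024 * 1024 + 4194304 - 1) / 4194304 ))   # 38 clusters for a 150 MiB lvol
echo $(( 38 * 4194304 / 4096 ))                           # 38912 blocks, matching bdev_get_bdevs
echo $(( 99 - 38 ))                                       # 61 free clusters, matching the check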
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a2ca8fc1-bd50-4cfc-bac3-8c22933a1286 00:08:09.532 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.532 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.791 00:08:09.791 real 0m16.873s 00:08:09.791 user 0m43.758s 00:08:09.791 sys 0m3.738s 00:08:09.791 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.791 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.791 ************************************ 00:08:09.791 END TEST lvs_grow_dirty 00:08:09.791 ************************************ 00:08:09.791 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:09.791 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:09.792 nvmf_trace.0 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.792 rmmod nvme_tcp 00:08:09.792 rmmod nvme_fabrics 00:08:09.792 rmmod nvme_keyring 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:09.792 
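# Annotation: teardown, condensed (paths shortened). The lvol, lvstore and AIO bdev are deleted,
# the xtrace shm buffer is archived for the CI artifacts, and nvmftestfini unloads the NVMe/TCP
# host modules before the target process is reaped:
rpc.py bdev_lvol_delete "$lvol"
rpc.py bdev_lvol_delete_lvstore -u "$lvs"
rpc.py bdev_aio_delete aio_bdev && rm -f aio_file
tar -C /dev/shm -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0    # process_shm --id 0
modprobe -v -r nvme-tcp                                   # also drops nvme_fabrics / nvme_keyring deps
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                        # killprocess, just below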
07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2308166 ']' 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2308166 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2308166 ']' 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2308166 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2308166 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2308166' 00:08:09.792 killing process with pid 2308166 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2308166 00:08:09.792 07:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2308166 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.052 07:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.590 00:08:12.590 real 0m40.832s 00:08:12.590 user 1m4.142s 00:08:12.590 sys 0m9.576s 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.590 ************************************ 00:08:12.590 END TEST nvmf_lvs_grow 00:08:12.590 ************************************ 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.590 ************************************ 00:08:12.590 START TEST nvmf_bdev_io_wait 00:08:12.590 ************************************ 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:12.590 * Looking for test storage... 00:08:12.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.590 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.591 --rc genhtml_branch_coverage=1 00:08:12.591 --rc genhtml_function_coverage=1 00:08:12.591 --rc genhtml_legend=1 00:08:12.591 --rc geninfo_all_blocks=1 00:08:12.591 --rc geninfo_unexecuted_blocks=1 00:08:12.591 00:08:12.591 ' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.591 --rc genhtml_branch_coverage=1 00:08:12.591 --rc genhtml_function_coverage=1 00:08:12.591 --rc genhtml_legend=1 00:08:12.591 --rc geninfo_all_blocks=1 00:08:12.591 --rc geninfo_unexecuted_blocks=1 00:08:12.591 00:08:12.591 ' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.591 --rc genhtml_branch_coverage=1 00:08:12.591 --rc genhtml_function_coverage=1 00:08:12.591 --rc genhtml_legend=1 00:08:12.591 --rc geninfo_all_blocks=1 00:08:12.591 --rc geninfo_unexecuted_blocks=1 00:08:12.591 00:08:12.591 ' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.591 --rc genhtml_branch_coverage=1 00:08:12.591 --rc genhtml_function_coverage=1 00:08:12.591 --rc genhtml_legend=1 00:08:12.591 --rc geninfo_all_blocks=1 00:08:12.591 --rc geninfo_unexecuted_blocks=1 00:08:12.591 00:08:12.591 ' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.591 07:51:06 
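# Annotation: the cmp_versions trace above is a field-by-field dotted-version compare used to gate
# the lcov coverage options on "lcov version < 2". A simplified sketch of the same idea (not
# SPDK's exact scripts/common.sh implementation):
lt() {                                   # lt 1.15 2 -> true
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                             # equal versions are not "less than"
}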
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
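# Annotation: nvmf/common.sh above builds the host identity from `nvme gen-hostnqn`; the UUID
# suffix doubles as the host ID, and both are handed to later `nvme connect` calls via NVME_HOST.
# Roughly (the exact parameter expansion used by common.sh may differ):
hostnqn=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
hostid=${hostnqn##*:uuid:}               # keep only the bare UUID
NVME_HOST=(--hostnqn="$hostnqn" --hostid="$hostid")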
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.591 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.592 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.592 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:12.592 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:12.592 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:12.592 07:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:17.868 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:17.868 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.868 07:51:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:17.868 Found net devices under 0000:86:00.0: cvl_0_0 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:17.868 Found net devices under 0000:86:00.1: cvl_0_1 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:17.868 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:17.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:08:17.869 00:08:17.869 --- 10.0.0.2 ping statistics --- 00:08:17.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.869 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:08:17.869 00:08:17.869 --- 10.0.0.1 ping statistics --- 00:08:17.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.869 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2312731 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2312731 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2312731 ']' 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.869 [2024-11-27 07:51:11.685807] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:08:17.869 [2024-11-27 07:51:11.685851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.869 [2024-11-27 07:51:11.751914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.869 [2024-11-27 07:51:11.796124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.869 [2024-11-27 07:51:11.796165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.869 [2024-11-27 07:51:11.796172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.869 [2024-11-27 07:51:11.796178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.869 [2024-11-27 07:51:11.796183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.869 [2024-11-27 07:51:11.797736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.869 [2024-11-27 07:51:11.797830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.869 [2024-11-27 07:51:11.797919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.869 [2024-11-27 07:51:11.797920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:17.869 [2024-11-27 07:51:11.941533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.869 Malloc0 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.869 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.129 [2024-11-27 07:51:11.988833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2312804 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2312806 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:18.129 { 00:08:18.129 "params": { 
00:08:18.129 "name": "Nvme$subsystem", 00:08:18.129 "trtype": "$TEST_TRANSPORT", 00:08:18.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.129 "adrfam": "ipv4", 00:08:18.129 "trsvcid": "$NVMF_PORT", 00:08:18.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.129 "hdgst": ${hdgst:-false}, 00:08:18.129 "ddgst": ${ddgst:-false} 00:08:18.129 }, 00:08:18.129 "method": "bdev_nvme_attach_controller" 00:08:18.129 } 00:08:18.129 EOF 00:08:18.129 )") 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2312808 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:18.129 { 00:08:18.129 "params": { 00:08:18.129 "name": "Nvme$subsystem", 00:08:18.129 "trtype": "$TEST_TRANSPORT", 00:08:18.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.129 "adrfam": "ipv4", 00:08:18.129 "trsvcid": "$NVMF_PORT", 00:08:18.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.129 "hdgst": ${hdgst:-false}, 00:08:18.129 "ddgst": ${ddgst:-false} 00:08:18.129 }, 00:08:18.129 "method": "bdev_nvme_attach_controller" 00:08:18.129 } 00:08:18.129 EOF 00:08:18.129 )") 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2312811 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:18.129 { 00:08:18.129 "params": { 00:08:18.129 "name": "Nvme$subsystem", 00:08:18.129 "trtype": "$TEST_TRANSPORT", 00:08:18.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.129 "adrfam": "ipv4", 00:08:18.129 "trsvcid": "$NVMF_PORT", 00:08:18.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.129 "hdgst": ${hdgst:-false}, 
00:08:18.129 "ddgst": ${ddgst:-false} 00:08:18.129 }, 00:08:18.129 "method": "bdev_nvme_attach_controller" 00:08:18.129 } 00:08:18.129 EOF 00:08:18.129 )") 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:18.129 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:18.129 07:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:18.129 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:18.129 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:18.129 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:18.129 { 00:08:18.129 "params": { 00:08:18.129 "name": "Nvme$subsystem", 00:08:18.129 "trtype": "$TEST_TRANSPORT", 00:08:18.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.130 "adrfam": "ipv4", 00:08:18.130 "trsvcid": "$NVMF_PORT", 00:08:18.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.130 "hdgst": ${hdgst:-false}, 00:08:18.130 "ddgst": ${ddgst:-false} 00:08:18.130 }, 00:08:18.130 "method": "bdev_nvme_attach_controller" 00:08:18.130 } 00:08:18.130 EOF 00:08:18.130 )") 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2312804 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:18.130 "params": { 00:08:18.130 "name": "Nvme1", 00:08:18.130 "trtype": "tcp", 00:08:18.130 "traddr": "10.0.0.2", 00:08:18.130 "adrfam": "ipv4", 00:08:18.130 "trsvcid": "4420", 00:08:18.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.130 "hdgst": false, 00:08:18.130 "ddgst": false 00:08:18.130 }, 00:08:18.130 "method": "bdev_nvme_attach_controller" 00:08:18.130 }' 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:18.130 "params": { 00:08:18.130 "name": "Nvme1", 00:08:18.130 "trtype": "tcp", 00:08:18.130 "traddr": "10.0.0.2", 00:08:18.130 "adrfam": "ipv4", 00:08:18.130 "trsvcid": "4420", 00:08:18.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.130 "hdgst": false, 00:08:18.130 "ddgst": false 00:08:18.130 }, 00:08:18.130 "method": "bdev_nvme_attach_controller" 00:08:18.130 }' 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:18.130 "params": { 00:08:18.130 "name": "Nvme1", 00:08:18.130 "trtype": "tcp", 00:08:18.130 "traddr": "10.0.0.2", 00:08:18.130 "adrfam": "ipv4", 00:08:18.130 "trsvcid": "4420", 00:08:18.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.130 "hdgst": false, 00:08:18.130 "ddgst": false 00:08:18.130 }, 00:08:18.130 "method": "bdev_nvme_attach_controller" 00:08:18.130 }' 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:18.130 07:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:18.130 "params": { 00:08:18.130 "name": "Nvme1", 00:08:18.130 "trtype": "tcp", 00:08:18.130 "traddr": "10.0.0.2", 00:08:18.130 "adrfam": "ipv4", 00:08:18.130 "trsvcid": "4420", 00:08:18.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.130 "hdgst": false, 00:08:18.130 "ddgst": false 00:08:18.130 }, 00:08:18.130 "method": "bdev_nvme_attach_controller" 00:08:18.130 }' 00:08:18.130 [2024-11-27 07:51:12.039576] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:08:18.130 [2024-11-27 07:51:12.039625] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:18.130 [2024-11-27 07:51:12.043452] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:08:18.130 [2024-11-27 07:51:12.043504] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:18.130 [2024-11-27 07:51:12.043755] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:08:18.130 [2024-11-27 07:51:12.043794] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:18.130 [2024-11-27 07:51:12.044777] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:08:18.130 [2024-11-27 07:51:12.044817] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:18.130 [2024-11-27 07:51:12.222851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.389 [2024-11-27 07:51:12.265886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:18.389 [2024-11-27 07:51:12.313663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.389 [2024-11-27 07:51:12.365280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.389 [2024-11-27 07:51:12.374747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:18.389 [2024-11-27 07:51:12.408147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:18.389 [2024-11-27 07:51:12.440101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.389 [2024-11-27 07:51:12.482910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:18.647 Running I/O for 1 seconds... 00:08:18.647 Running I/O for 1 seconds... 00:08:18.647 Running I/O for 1 seconds... 00:08:18.647 Running I/O for 1 seconds... 00:08:19.581 11399.00 IOPS, 44.53 MiB/s 00:08:19.581 Latency(us) 00:08:19.581 [2024-11-27T06:51:13.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.581 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:19.581 Nvme1n1 : 1.01 11458.85 44.76 0.00 0.00 11129.89 5584.81 16982.37 00:08:19.581 [2024-11-27T06:51:13.690Z] =================================================================================================================== 00:08:19.581 [2024-11-27T06:51:13.690Z] Total : 11458.85 44.76 0.00 0.00 11129.89 5584.81 16982.37 00:08:19.581 9540.00 IOPS, 37.27 MiB/s 00:08:19.581 Latency(us) 00:08:19.581 [2024-11-27T06:51:13.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.581 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:19.581 Nvme1n1 : 1.01 9598.61 37.49 0.00 0.00 13282.42 6268.66 21883.33 00:08:19.581 [2024-11-27T06:51:13.690Z] =================================================================================================================== 00:08:19.581 [2024-11-27T06:51:13.690Z] Total : 9598.61 37.49 0.00 0.00 13282.42 6268.66 21883.33 00:08:19.581 11400.00 IOPS, 44.53 MiB/s 00:08:19.581 Latency(us) 00:08:19.581 [2024-11-27T06:51:13.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.581 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:19.581 Nvme1n1 : 1.00 11484.62 44.86 0.00 0.00 11119.43 3105.84 22681.15 00:08:19.581 [2024-11-27T06:51:13.690Z] =================================================================================================================== 00:08:19.581 [2024-11-27T06:51:13.690Z] Total : 11484.62 44.86 0.00 0.00 11119.43 3105.84 22681.15 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2312806 00:08:19.841 238016.00 IOPS, 929.75 MiB/s 00:08:19.841 Latency(us) 00:08:19.841 [2024-11-27T06:51:13.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.841 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:19.841 Nvme1n1 : 1.00 237643.30 928.29 0.00 0.00 535.60 233.29 1538.67 00:08:19.841 
[2024-11-27T06:51:13.950Z] =================================================================================================================== 00:08:19.841 [2024-11-27T06:51:13.950Z] Total : 237643.30 928.29 0.00 0.00 535.60 233.29 1538.67 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2312808 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2312811 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:19.841 rmmod nvme_tcp 00:08:19.841 rmmod nvme_fabrics 00:08:19.841 rmmod nvme_keyring 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2312731 ']' 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2312731 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2312731 ']' 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2312731 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.841 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2312731 00:08:20.101 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.101 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.101 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2312731' 00:08:20.101 killing process with pid 2312731 
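With the write job reaped here and the read/flush/unmap waiters returning in the entries that follow, the test deletes the subsystem, unloads the nvme_tcp/nvme_fabrics/nvme_keyring modules and kills the target process (nvmfpid 2312731). A minimal manual equivalent of that teardown, assuming the default /var/tmp/spdk.sock RPC socket and substituting a placeholder for the PID recorded above:

    # drop the subsystem, unload the host-side transport modules, stop the target
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill <nvmf_tgt-pid>            # 2312731 in this run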
00:08:20.101 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2312731 00:08:20.101 07:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2312731 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.101 07:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:22.637 00:08:22.637 real 0m10.016s 00:08:22.637 user 0m15.836s 00:08:22.637 sys 0m5.567s 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.637 ************************************ 00:08:22.637 END TEST nvmf_bdev_io_wait 00:08:22.637 ************************************ 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.637 ************************************ 00:08:22.637 START TEST nvmf_queue_depth 00:08:22.637 ************************************ 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:22.637 * Looking for test storage... 
00:08:22.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.637 --rc genhtml_branch_coverage=1 00:08:22.637 --rc genhtml_function_coverage=1 00:08:22.637 --rc genhtml_legend=1 00:08:22.637 --rc geninfo_all_blocks=1 00:08:22.637 --rc geninfo_unexecuted_blocks=1 00:08:22.637 00:08:22.637 ' 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.637 --rc genhtml_branch_coverage=1 00:08:22.637 --rc genhtml_function_coverage=1 00:08:22.637 --rc genhtml_legend=1 00:08:22.637 --rc geninfo_all_blocks=1 00:08:22.637 --rc geninfo_unexecuted_blocks=1 00:08:22.637 00:08:22.637 ' 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.637 --rc genhtml_branch_coverage=1 00:08:22.637 --rc genhtml_function_coverage=1 00:08:22.637 --rc genhtml_legend=1 00:08:22.637 --rc geninfo_all_blocks=1 00:08:22.637 --rc geninfo_unexecuted_blocks=1 00:08:22.637 00:08:22.637 ' 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.637 --rc genhtml_branch_coverage=1 00:08:22.637 --rc genhtml_function_coverage=1 00:08:22.637 --rc genhtml_legend=1 00:08:22.637 --rc geninfo_all_blocks=1 00:08:22.637 --rc geninfo_unexecuted_blocks=1 00:08:22.637 00:08:22.637 ' 00:08:22.637 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:22.638 07:51:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:27.913 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:27.913 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.913 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:27.914 Found net devices under 0000:86:00.0: cvl_0_0 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:27.914 Found net devices under 0000:86:00.1: cvl_0_1 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:08:27.914 00:08:27.914 --- 10.0.0.2 ping statistics --- 00:08:27.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.914 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:08:27.914 00:08:27.914 --- 10.0.0.1 ping statistics --- 00:08:27.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.914 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2316597 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2316597 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2316597 ']' 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.914 07:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:27.914 [2024-11-27 07:51:21.927210] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
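The nvmf_tcp_init trace above boils down to a small amount of manual network plumbing. A condensed sketch of those steps, using the interface names and addresses from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, TCP port 4420); xtrace prefixes and error handling are omitted:

ip netns add cvl_0_0_ns_spdk                                # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # move the first E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                          # host -> namespace reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # namespace -> host reachability check

The ACCEPT rule carries an SPDK_NVMF comment so that the later teardown (iptables-save | grep -v SPDK_NVMF | iptables-restore, visible further down in this log) can strip exactly this rule and nothing else.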
00:08:27.914 [2024-11-27 07:51:21.927256] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.914 [2024-11-27 07:51:21.994921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.175 [2024-11-27 07:51:22.037023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.175 [2024-11-27 07:51:22.037057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.175 [2024-11-27 07:51:22.037064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.175 [2024-11-27 07:51:22.037071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.175 [2024-11-27 07:51:22.037076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.175 [2024-11-27 07:51:22.037616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.175 [2024-11-27 07:51:22.170450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.175 Malloc0 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.175 07:51:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.175 [2024-11-27 07:51:22.212811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2316616 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2316616 /var/tmp/bdevperf.sock 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2316616 ']' 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.175 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.175 [2024-11-27 07:51:22.264527] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
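Stripped of the rpc_cmd and xtrace wrapping, the target/queue_depth.sh configuration traced above reduces to a short RPC sequence plus the bdevperf launch. The scripts/rpc.py form below is an equivalent rendering of what rpc_cmd issues, not a literal copy of the helper (target RPCs go to the default /var/tmp/spdk.sock, bdevperf RPCs to /var/tmp/bdevperf.sock):

# target side: transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem, namespace, listener
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf in wait-for-RPC mode, queue depth 1024, 4 KiB verify I/O for 10 seconds
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the summary that follows, 12029.49 IOPS of 4096-byte I/O works out to 12029.49 x 4096 / 2^20, approximately 46.99 MiB/s, which matches the MiB/s column bdevperf reports.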
00:08:28.175 [2024-11-27 07:51:22.264573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316616 ] 00:08:28.434 [2024-11-27 07:51:22.326699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.434 [2024-11-27 07:51:22.370878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.434 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.435 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:28.435 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:28.435 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.435 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.694 NVMe0n1 00:08:28.694 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.694 07:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:28.694 Running I/O for 10 seconds... 00:08:31.004 11264.00 IOPS, 44.00 MiB/s [2024-11-27T06:51:26.047Z] 11769.50 IOPS, 45.97 MiB/s [2024-11-27T06:51:26.980Z] 11755.67 IOPS, 45.92 MiB/s [2024-11-27T06:51:27.913Z] 11782.00 IOPS, 46.02 MiB/s [2024-11-27T06:51:28.852Z] 11878.20 IOPS, 46.40 MiB/s [2024-11-27T06:51:29.788Z] 11949.00 IOPS, 46.68 MiB/s [2024-11-27T06:51:31.166Z] 11988.57 IOPS, 46.83 MiB/s [2024-11-27T06:51:32.103Z] 12003.50 IOPS, 46.89 MiB/s [2024-11-27T06:51:33.041Z] 12023.11 IOPS, 46.97 MiB/s [2024-11-27T06:51:33.041Z] 11989.00 IOPS, 46.83 MiB/s 00:08:38.932 Latency(us) 00:08:38.932 [2024-11-27T06:51:33.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.932 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:38.932 Verification LBA range: start 0x0 length 0x4000 00:08:38.932 NVMe0n1 : 10.05 12029.49 46.99 0.00 0.00 84832.31 9175.04 56303.97 00:08:38.932 [2024-11-27T06:51:33.041Z] =================================================================================================================== 00:08:38.932 [2024-11-27T06:51:33.041Z] Total : 12029.49 46.99 0.00 0.00 84832.31 9175.04 56303.97 00:08:38.932 { 00:08:38.932 "results": [ 00:08:38.932 { 00:08:38.933 "job": "NVMe0n1", 00:08:38.933 "core_mask": "0x1", 00:08:38.933 "workload": "verify", 00:08:38.933 "status": "finished", 00:08:38.933 "verify_range": { 00:08:38.933 "start": 0, 00:08:38.933 "length": 16384 00:08:38.933 }, 00:08:38.933 "queue_depth": 1024, 00:08:38.933 "io_size": 4096, 00:08:38.933 "runtime": 10.047562, 00:08:38.933 "iops": 12029.485361722574, 00:08:38.933 "mibps": 46.99017719422881, 00:08:38.933 "io_failed": 0, 00:08:38.933 "io_timeout": 0, 00:08:38.933 "avg_latency_us": 84832.30790685126, 00:08:38.933 "min_latency_us": 9175.04, 00:08:38.933 "max_latency_us": 56303.97217391304 00:08:38.933 } 00:08:38.933 ], 00:08:38.933 "core_count": 1 00:08:38.933 } 00:08:38.933 07:51:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2316616 00:08:38.933 07:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2316616 ']' 00:08:38.933 07:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2316616 00:08:38.933 07:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:38.933 07:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.933 07:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2316616 00:08:38.933 07:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.933 07:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.933 07:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2316616' 00:08:38.933 killing process with pid 2316616 00:08:38.933 07:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2316616 00:08:38.933 Received shutdown signal, test time was about 10.000000 seconds 00:08:38.933 00:08:38.933 Latency(us) 00:08:38.933 [2024-11-27T06:51:33.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.933 [2024-11-27T06:51:33.042Z] =================================================================================================================== 00:08:38.933 [2024-11-27T06:51:33.042Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:38.933 07:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2316616 00:08:38.933 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:38.933 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:38.933 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:38.933 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:38.933 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.933 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:38.933 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.933 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.933 rmmod nvme_tcp 00:08:39.192 rmmod nvme_fabrics 00:08:39.192 rmmod nvme_keyring 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2316597 ']' 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2316597 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2316597 ']' 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2316597 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2316597 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2316597' 00:08:39.192 killing process with pid 2316597 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2316597 00:08:39.192 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2316597 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.452 07:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.357 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:41.357 00:08:41.357 real 0m19.135s 00:08:41.357 user 0m22.817s 00:08:41.357 sys 0m5.692s 00:08:41.357 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.357 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.357 ************************************ 00:08:41.357 END TEST nvmf_queue_depth 00:08:41.357 ************************************ 00:08:41.358 07:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:41.358 07:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.358 07:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.358 07:51:35 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.358 ************************************ 00:08:41.358 START TEST nvmf_target_multipath 00:08:41.358 ************************************ 00:08:41.358 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:41.617 * Looking for test storage... 00:08:41.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.617 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.618 --rc genhtml_branch_coverage=1 00:08:41.618 --rc genhtml_function_coverage=1 00:08:41.618 --rc genhtml_legend=1 00:08:41.618 --rc geninfo_all_blocks=1 00:08:41.618 --rc geninfo_unexecuted_blocks=1 00:08:41.618 00:08:41.618 ' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.618 --rc genhtml_branch_coverage=1 00:08:41.618 --rc genhtml_function_coverage=1 00:08:41.618 --rc genhtml_legend=1 00:08:41.618 --rc geninfo_all_blocks=1 00:08:41.618 --rc geninfo_unexecuted_blocks=1 00:08:41.618 00:08:41.618 ' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:41.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.618 --rc genhtml_branch_coverage=1 00:08:41.618 --rc genhtml_function_coverage=1 00:08:41.618 --rc genhtml_legend=1 00:08:41.618 --rc geninfo_all_blocks=1 00:08:41.618 --rc geninfo_unexecuted_blocks=1 00:08:41.618 00:08:41.618 ' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.618 --rc genhtml_branch_coverage=1 00:08:41.618 --rc genhtml_function_coverage=1 00:08:41.618 --rc genhtml_legend=1 00:08:41.618 --rc geninfo_all_blocks=1 00:08:41.618 --rc geninfo_unexecuted_blocks=1 00:08:41.618 00:08:41.618 ' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.618 07:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.190 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:48.191 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:48.191 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:48.191 Found net devices under 0000:86:00.0: cvl_0_0 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.191 07:51:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:48.191 Found net devices under 0000:86:00.1: cvl_0_1 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:48.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:08:48.191 00:08:48.191 --- 10.0.0.2 ping statistics --- 00:08:48.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.191 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:08:48.191 00:08:48.191 --- 10.0.0.1 ping statistics --- 00:08:48.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.191 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:48.191 only one NIC for nvmf test 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
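What nvmf_tcp_init traces above amounts to is a small two-port loopback topology built from the E810 interfaces: cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk and addressed as the target (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule tagged SPDK_NVMF opens TCP port 4420, and a ping in each direction confirms the path. A condensed sketch of that sequence, using only the names and addresses shown in this run:

    NS=cvl_0_0_ns_spdk     # target network namespace
    TGT_IF=cvl_0_0         # target-side port, moved into $NS
    INI_IF=cvl_0_1         # initiator-side port, left in the root namespace

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target IP
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # open the NVMe/TCP port; the comment tag lets teardown strip only SPDK rules
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                      # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> initiator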
00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.191 rmmod nvme_tcp 00:08:48.191 rmmod nvme_fabrics 00:08:48.191 rmmod nvme_keyring 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.191 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.192 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.192 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.192 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.192 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.192 07:51:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.571 00:08:49.571 real 0m8.120s 00:08:49.571 user 0m1.747s 00:08:49.571 sys 0m4.386s 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:49.571 ************************************ 00:08:49.571 END TEST nvmf_target_multipath 00:08:49.571 ************************************ 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.571 ************************************ 00:08:49.571 START TEST nvmf_zcopy 00:08:49.571 ************************************ 00:08:49.571 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:49.832 * Looking for test storage... 
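The multipath test above exits early because NVMF_SECOND_TARGET_IP is empty ("only one NIC for nvmf test"), and nvmftestfini unwinds the setup before nvmf_zcopy starts and rebuilds the same topology from scratch. The cleanup path, as traced, is roughly the following sketch; the namespace deletion is an assumption, since _remove_spdk_ns is not expanded in the trace:

    # unload the NVMe/TCP initiator stack (also drops nvme_fabrics / nvme_keyring)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # drop only the iptables rules carrying the SPDK_NVMF comment tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # assumed equivalent of _remove_spdk_ns, which the log does not show expanded
    ip netns delete cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_1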
00:08:49.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:49.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.832 --rc genhtml_branch_coverage=1 00:08:49.832 --rc genhtml_function_coverage=1 00:08:49.832 --rc genhtml_legend=1 00:08:49.832 --rc geninfo_all_blocks=1 00:08:49.832 --rc geninfo_unexecuted_blocks=1 00:08:49.832 00:08:49.832 ' 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:49.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.832 --rc genhtml_branch_coverage=1 00:08:49.832 --rc genhtml_function_coverage=1 00:08:49.832 --rc genhtml_legend=1 00:08:49.832 --rc geninfo_all_blocks=1 00:08:49.832 --rc geninfo_unexecuted_blocks=1 00:08:49.832 00:08:49.832 ' 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:49.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.832 --rc genhtml_branch_coverage=1 00:08:49.832 --rc genhtml_function_coverage=1 00:08:49.832 --rc genhtml_legend=1 00:08:49.832 --rc geninfo_all_blocks=1 00:08:49.832 --rc geninfo_unexecuted_blocks=1 00:08:49.832 00:08:49.832 ' 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:49.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.832 --rc genhtml_branch_coverage=1 00:08:49.832 --rc genhtml_function_coverage=1 00:08:49.832 --rc genhtml_legend=1 00:08:49.832 --rc geninfo_all_blocks=1 00:08:49.832 --rc geninfo_unexecuted_blocks=1 00:08:49.832 00:08:49.832 ' 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.832 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:49.833 07:51:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:55.108 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:55.108 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.108 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:55.109 Found net devices under 0000:86:00.0: cvl_0_0 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:55.109 Found net devices under 0000:86:00.1: cvl_0_1 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:55.109 07:51:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:55.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:08:55.109 00:08:55.109 --- 10.0.0.2 ping statistics --- 00:08:55.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.109 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:55.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:08:55.109 00:08:55.109 --- 10.0.0.1 ping statistics --- 00:08:55.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.109 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2325504 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2325504 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2325504 ']' 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.109 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.109 [2024-11-27 07:51:49.110094] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
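nvmfappstart above boils down to launching nvmf_tgt inside the target namespace and then waiting for its RPC socket. Outside the harness the same launch would look roughly like this; the polling loop is a simplified stand-in for waitforlisten, which watches /var/tmp/spdk.sock, and the paths are shortened relative to the workspace:

    # single reactor on core 1 (-m 0x2), all tracepoint groups enabled (-e 0xFFFF)
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # simplified stand-in for waitforlisten: poll the RPC socket until it answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done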
00:08:55.109 [2024-11-27 07:51:49.110139] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.109 [2024-11-27 07:51:49.176193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.368 [2024-11-27 07:51:49.218060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.368 [2024-11-27 07:51:49.218094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.368 [2024-11-27 07:51:49.218101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.368 [2024-11-27 07:51:49.218107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.368 [2024-11-27 07:51:49.218112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.368 [2024-11-27 07:51:49.218697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.368 [2024-11-27 07:51:49.355373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.368 [2024-11-27 07:51:49.371542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.368 malloc0 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.368 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:55.369 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:55.369 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:55.369 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.369 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.369 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.369 { 00:08:55.369 "params": { 00:08:55.369 "name": "Nvme$subsystem", 00:08:55.369 "trtype": "$TEST_TRANSPORT", 00:08:55.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.369 "adrfam": "ipv4", 00:08:55.369 "trsvcid": "$NVMF_PORT", 00:08:55.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.369 "hdgst": ${hdgst:-false}, 00:08:55.369 "ddgst": ${ddgst:-false} 00:08:55.369 }, 00:08:55.369 "method": "bdev_nvme_attach_controller" 00:08:55.369 } 00:08:55.369 EOF 00:08:55.369 )") 00:08:55.369 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:55.369 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
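The target-side configuration for the zcopy test comes down to five RPCs plus a bdevperf JSON generated by gen_nvmf_target_json (printed just below). rpc_cmd in the harness is a wrapper around scripts/rpc.py, so an equivalent manual sequence is roughly the following; the flags are copied from the trace, and the comments are a best-effort reading of them:

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

    # TCP transport with zero-copy enabled; -o and -c 0 as passed in this run
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

    # subsystem allowing any host (-a), serial SPDK00000000000001, at most 10 namespaces
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1 of cnode1
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches as the initiator using the generated JSON (a single bdev_nvme_attach_controller against 10.0.0.2:4420) and runs 10 seconds of 8 KiB verify I/O at queue depth 128.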
00:08:55.369 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:55.369 07:51:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.369 "params": { 00:08:55.369 "name": "Nvme1", 00:08:55.369 "trtype": "tcp", 00:08:55.369 "traddr": "10.0.0.2", 00:08:55.369 "adrfam": "ipv4", 00:08:55.369 "trsvcid": "4420", 00:08:55.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.369 "hdgst": false, 00:08:55.369 "ddgst": false 00:08:55.369 }, 00:08:55.369 "method": "bdev_nvme_attach_controller" 00:08:55.369 }' 00:08:55.369 [2024-11-27 07:51:49.449526] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:08:55.369 [2024-11-27 07:51:49.449566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2325530 ] 00:08:55.627 [2024-11-27 07:51:49.512256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.627 [2024-11-27 07:51:49.553310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.885 Running I/O for 10 seconds... 00:08:57.760 8259.00 IOPS, 64.52 MiB/s [2024-11-27T06:51:52.807Z] 8390.00 IOPS, 65.55 MiB/s [2024-11-27T06:51:54.184Z] 8442.67 IOPS, 65.96 MiB/s [2024-11-27T06:51:55.216Z] 8467.25 IOPS, 66.15 MiB/s [2024-11-27T06:51:55.816Z] 8470.00 IOPS, 66.17 MiB/s [2024-11-27T06:51:57.194Z] 8482.50 IOPS, 66.27 MiB/s [2024-11-27T06:51:58.130Z] 8491.71 IOPS, 66.34 MiB/s [2024-11-27T06:51:59.066Z] 8499.25 IOPS, 66.40 MiB/s [2024-11-27T06:52:00.004Z] 8504.89 IOPS, 66.44 MiB/s [2024-11-27T06:52:00.004Z] 8512.80 IOPS, 66.51 MiB/s 00:09:05.895 Latency(us) 00:09:05.895 [2024-11-27T06:52:00.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.895 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:05.895 Verification LBA range: start 0x0 length 0x1000 00:09:05.895 Nvme1n1 : 10.01 8515.27 66.53 0.00 0.00 14989.26 2293.76 22681.15 00:09:05.895 [2024-11-27T06:52:00.004Z] =================================================================================================================== 00:09:05.895 [2024-11-27T06:52:00.004Z] Total : 8515.27 66.53 0.00 0.00 14989.26 2293.76 22681.15 00:09:05.895 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2327271 00:09:05.895 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:05.895 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.895 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:05.895 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:05.895 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:05.895 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:05.895 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:05.895 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:05.895 { 00:09:05.896 "params": { 00:09:05.896 "name": 
"Nvme$subsystem", 00:09:05.896 "trtype": "$TEST_TRANSPORT", 00:09:05.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.896 "adrfam": "ipv4", 00:09:05.896 "trsvcid": "$NVMF_PORT", 00:09:05.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.896 "hdgst": ${hdgst:-false}, 00:09:05.896 "ddgst": ${ddgst:-false} 00:09:05.896 }, 00:09:05.896 "method": "bdev_nvme_attach_controller" 00:09:05.896 } 00:09:05.896 EOF 00:09:05.896 )") 00:09:05.896 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:05.896 [2024-11-27 07:51:59.995498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.896 [2024-11-27 07:51:59.995532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.896 07:51:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:06.155 07:52:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:06.155 [2024-11-27 07:52:00.003502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.155 [2024-11-27 07:52:00.003522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.155 07:52:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.155 "params": { 00:09:06.155 "name": "Nvme1", 00:09:06.155 "trtype": "tcp", 00:09:06.155 "traddr": "10.0.0.2", 00:09:06.155 "adrfam": "ipv4", 00:09:06.155 "trsvcid": "4420", 00:09:06.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.155 "hdgst": false, 00:09:06.155 "ddgst": false 00:09:06.155 }, 00:09:06.155 "method": "bdev_nvme_attach_controller" 00:09:06.155 }' 00:09:06.155 [2024-11-27 07:52:00.011513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.155 [2024-11-27 07:52:00.011528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.155 [2024-11-27 07:52:00.019536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.155 [2024-11-27 07:52:00.019548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.155 [2024-11-27 07:52:00.023826] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:09:06.155 [2024-11-27 07:52:00.023868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327271 ]
00:09:06.155 [2024-11-27 07:52:00.027551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.155 [2024-11-27 07:52:00.027562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:06.156 [2024-11-27 07:52:00.087658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:06.156 [2024-11-27 07:52:00.130218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 5 seconds...
00:09:07.456 16389.00 IOPS, 128.04 MiB/s [2024-11-27T06:52:01.565Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.496 [2024-11-27 07:52:02.446539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.496 [2024-11-27 07:52:02.446558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.496 [2024-11-27 07:52:02.455663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.496 [2024-11-27 07:52:02.455682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.496 16533.00 IOPS, 129.16 MiB/s [2024-11-27T06:52:02.605Z] [2024-11-27 07:52:02.465069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.496 [2024-11-27 07:52:02.465088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.496 [2024-11-27 07:52:02.473632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.496 [2024-11-27 07:52:02.473650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.496 [2024-11-27 07:52:02.482570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.496 [2024-11-27 07:52:02.482589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.496 [2024-11-27 07:52:02.491757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.496 [2024-11-27 07:52:02.491776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 07:52:02.501274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.501293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 07:52:02.508186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.508204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 07:52:02.519503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.519522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 07:52:02.528259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.528278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 07:52:02.537496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.537515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 07:52:02.546781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.546799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 07:52:02.556138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.556156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 07:52:02.565419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.565438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 
07:52:02.574696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.574716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 07:52:02.583894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.583914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.497 [2024-11-27 07:52:02.593165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.497 [2024-11-27 07:52:02.593185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.602551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.602572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.611362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.611382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.620831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.620851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.630055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.630074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.639203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.639222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.647917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.647938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.656585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.656605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.666224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.666244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.674835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.674854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.683466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.683485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.692253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.692273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.701411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.701431] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.710051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.710071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.719111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.719131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.728174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.728198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.737403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.737423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.746647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.746666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.755945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.755971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.765316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.765335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.775071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.775091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.782414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.782433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.793091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.793111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.801781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.801800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.756 [2024-11-27 07:52:02.811055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.756 [2024-11-27 07:52:02.811074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.757 [2024-11-27 07:52:02.820589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.757 [2024-11-27 07:52:02.820609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.757 [2024-11-27 07:52:02.830240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.757 [2024-11-27 07:52:02.830259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.757 [2024-11-27 07:52:02.839175] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.757 [2024-11-27 07:52:02.839194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.757 [2024-11-27 07:52:02.848414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.757 [2024-11-27 07:52:02.848434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.757 [2024-11-27 07:52:02.857564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.757 [2024-11-27 07:52:02.857584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.015 [2024-11-27 07:52:02.867577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.015 [2024-11-27 07:52:02.867598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.015 [2024-11-27 07:52:02.876252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.015 [2024-11-27 07:52:02.876272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.015 [2024-11-27 07:52:02.886269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.015 [2024-11-27 07:52:02.886288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.015 [2024-11-27 07:52:02.895697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.015 [2024-11-27 07:52:02.895716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.904907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.904930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.914166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.914185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.923479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.923502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.932146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.932165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.941371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.941391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.951141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.951160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.959943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.959968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.968480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.968498] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.977741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.977760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.986318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.986336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:02.995599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:02.995618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.004859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.004877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.013544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.013563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.022907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.022925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.031690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.031709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.041114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.041133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.050435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.050454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.059639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.059656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.068371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.068390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.077633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.077655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.087638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.087656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.096509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.096527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.105124] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.105144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.016 [2024-11-27 07:52:03.114361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.016 [2024-11-27 07:52:03.114379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.123034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.123054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.132489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.132508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.141767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.141785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.151562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.151580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.160399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.160417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.169583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.169602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.179126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.179146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.188653] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.188671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.198216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.198234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.206926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.206945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.216114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.216133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.225640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.225659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.234941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.234965] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.244457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.244476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.276 [2024-11-27 07:52:03.253127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.276 [2024-11-27 07:52:03.253151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.261656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.261675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.271393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.271411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.280005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.280023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.288670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.288689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.297468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.297486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.306748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.306767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.315888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.315908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.325652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.325670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.334326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.334345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.343879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.343897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.353117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.353136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.362288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.362308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.370839] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.370857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.277 [2024-11-27 07:52:03.379593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.277 [2024-11-27 07:52:03.379612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.388478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.388497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.397635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.397654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.406235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.406254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.414767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.414786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.424139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.424162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.433279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.433297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.442371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.442390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.452014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.452033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.460838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.460856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 16571.00 IOPS, 129.46 MiB/s [2024-11-27T06:52:03.646Z] [2024-11-27 07:52:03.470051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.470070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.479820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.479838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.488451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.488469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.498229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
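The pair of messages repeating above comes from the nvmf namespace-add RPC path: spdk_nvmf_subsystem_add_ns_ext() in subsystem.c rejects the request because NSID 1 is already attached to the subsystem, and nvmf_rpc.c then reports the failed RPC ("Unable to add namespace"). A minimal sketch of the kind of call that produces this error, assuming SPDK's scripts/rpc.py, a hypothetical subsystem nqn.2016-06.io.spdk:cnode1 that already holds NSID 1, and a hypothetical bdev named Malloc0 (exact option names may vary between SPDK versions):
# hypothetical reproduction sketch, not taken from this build's test scripts
# the subsystem is assumed to already have a namespace attached at NSID 1
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
# expected outcome: the target refuses the add and logs the same
# "Requested NSID 1 already in use" / "Unable to add namespace" pair seen above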
00:09:09.537 [2024-11-27 07:52:03.498248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.506902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.506921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.516091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.516121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.525328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.525347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.534579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.534597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.543817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.543836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.553108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.553126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.562530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.562548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.571862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.571882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.581834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.581853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.591051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.591069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.599685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.599703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.608222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.608240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.617537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.617555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.626784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.626802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.537 [2024-11-27 07:52:03.635996] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.537 [2024-11-27 07:52:03.636014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.796 [2024-11-27 07:52:03.645657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.796 [2024-11-27 07:52:03.645676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.796 [2024-11-27 07:52:03.654310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.796 [2024-11-27 07:52:03.654330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.796 [2024-11-27 07:52:03.663349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.796 [2024-11-27 07:52:03.663369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.672580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.672600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.681793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.681812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.691155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.691174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.700636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.700655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.709331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.709350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.718552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.718571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.727791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.727811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.737175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.737194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.746552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.746570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.755270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.755289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.764011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.764030] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.773329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.773347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.782550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.782568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.791820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.791838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.801052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.801070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.810348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.810366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.819631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.819650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.828831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.828849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.837599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.837618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.846747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.846766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.855881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.855900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.865167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.865185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.874272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.874290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.883557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.883575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.892102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.892120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.797 [2024-11-27 07:52:03.900986] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.797 [2024-11-27 07:52:03.901005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:03.909787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:03.909807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:03.919104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:03.919122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:03.928419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:03.928438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:03.937720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:03.937744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:03.946383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:03.946401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:03.955781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:03.955799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:03.965246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:03.965266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:03.974113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:03.974132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:03.983239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:03.983259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:03.993108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:03.993127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.001934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.001960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.011507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.011526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.021022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.021041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.029783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.029802] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.039383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.039402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.048425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.048443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.057783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.057802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.064791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.064810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.076038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.076056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.085354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.085373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.094328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.094346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.103366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.103384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.112652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.112675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.121960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.121979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.131033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.131052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.057 [2024-11-27 07:52:04.139651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.057 [2024-11-27 07:52:04.139669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.058 [2024-11-27 07:52:04.148821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.058 [2024-11-27 07:52:04.148840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.058 [2024-11-27 07:52:04.157495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.058 [2024-11-27 07:52:04.157514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.317 [2024-11-27 07:52:04.166531] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.317 [2024-11-27 07:52:04.166551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.317 [2024-11-27 07:52:04.175232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.317 [2024-11-27 07:52:04.175250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.317 [2024-11-27 07:52:04.184802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.317 [2024-11-27 07:52:04.184822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.317 [2024-11-27 07:52:04.194144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.317 [2024-11-27 07:52:04.194163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.317 [2024-11-27 07:52:04.203111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.203130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.212577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.212596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.221477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.221496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.230104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.230122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.238871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.238890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.248289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.248307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.257038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.257057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.266193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.266213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.275509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.275528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.284778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.284801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.294020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.294038] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.302692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.302710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.311388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.311406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.320391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.320410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.329215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.329233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.338436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.338454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.347670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.347689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.357036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.357055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.365748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.365766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.375072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.375090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.384392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.384410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.393558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.393577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.411419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.411437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.318 [2024-11-27 07:52:04.420800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.318 [2024-11-27 07:52:04.420819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.577 [2024-11-27 07:52:04.430228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.577 [2024-11-27 07:52:04.430248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.577 [2024-11-27 07:52:04.439611] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.577 [2024-11-27 07:52:04.439629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.577 [2024-11-27 07:52:04.448312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.577 [2024-11-27 07:52:04.448331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.577 [2024-11-27 07:52:04.457465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.577 [2024-11-27 07:52:04.457484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.577 [2024-11-27 07:52:04.467302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.577 [2024-11-27 07:52:04.467325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.577 16594.75 IOPS, 129.65 MiB/s [2024-11-27T06:52:04.686Z] [2024-11-27 07:52:04.476110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.578 [2024-11-27 07:52:04.476129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.578 [2024-11-27 07:52:04.484799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.578 [2024-11-27 07:52:04.484817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.578 [2024-11-27 07:52:04.491760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.578 [2024-11-27 07:52:04.491778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.578 [2024-11-27 07:52:04.503107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.578 [2024-11-27 07:52:04.503126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.578 [2024-11-27 07:52:04.512039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.578 [2024-11-27 07:52:04.512057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.578 [2024-11-27 07:52:04.521427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.578 [2024-11-27 07:52:04.521446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.578 [2024-11-27 07:52:04.530439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.578 [2024-11-27 07:52:04.530457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.578 [2024-11-27 07:52:04.539629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.578 [2024-11-27 07:52:04.539647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.578 [2024-11-27 07:52:04.548808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.578 [2024-11-27 07:52:04.548827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.578 [2024-11-27 07:52:04.558213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.578 [2024-11-27 07:52:04.558232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.578 [2024-11-27 07:52:04.567420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
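Every retry in this window asks for the same NSID, so each attempt fails identically, while the interleaved IOPS lines appear to come from the I/O workload running alongside the test. A hedged sketch of how the NSIDs already attached to a subsystem can be inspected, and how such a collision can be avoided by letting the target choose the next free NSID (assuming the usual SPDK rpc.py behaviour when the NSID option is omitted; the NQN and the bdev name Malloc1 are placeholders, not names from this run):
# list nvmf subsystems and their namespaces; the output shows which NSIDs are taken
scripts/rpc.py nvmf_get_subsystems
# omit the NSID option so the target assigns the next available NSID itself
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1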
00:09:10.578 [2024-11-27 07:52:04.567438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.578 [2024-11-27 07:52:04.576598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.578 [2024-11-27 07:52:04.576617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.578 [... the same pair of messages (subsystem.c:2126: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520: "Unable to add namespace") repeats roughly every 9 ms from 07:52:04.585959 through 07:52:05.468781 ...]
00:09:11.616 16598.60 IOPS, 129.68 MiB/s [2024-11-27T06:52:05.725Z]
00:09:11.616 [2024-11-27 07:52:05.475304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:11.616 [2024-11-27 07:52:05.475322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:11.616
00:09:11.616 Latency(us)
00:09:11.616 [2024-11-27T06:52:05.725Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:11.616 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:11.616 Nvme1n1                     :       5.01   16601.89     129.70       0.00     0.00    7703.08    3177.07   15158.76
00:09:11.616 [2024-11-27T06:52:05.725Z] ===================================================================================================================
00:09:11.616 [2024-11-27T06:52:05.725Z] Total                       :              16601.89     129.70       0.00     0.00    7703.08    3177.07   15158.76
00:09:11.616 [... the same "Requested NSID 1 already in use" / "Unable to add namespace" pair repeats roughly every 8 ms from 07:52:05.483312 through 07:52:05.579579 ...]
00:09:11.616 [2024-11-27 07:52:05.587584]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.616 [2024-11-27 07:52:05.587595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.616 [2024-11-27 07:52:05.595604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.616 [2024-11-27 07:52:05.595614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.616 [2024-11-27 07:52:05.603626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.616 [2024-11-27 07:52:05.603637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.616 [2024-11-27 07:52:05.611649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.616 [2024-11-27 07:52:05.611662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.616 [2024-11-27 07:52:05.619670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.616 [2024-11-27 07:52:05.619681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.616 [2024-11-27 07:52:05.627688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.616 [2024-11-27 07:52:05.627698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.616 [2024-11-27 07:52:05.635712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.616 [2024-11-27 07:52:05.635722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2327271) - No such process 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2327271 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.616 delay0 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.616 07:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 
50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:11.875 [2024-11-27 07:52:05.756627] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:19.995 [2024-11-27 07:52:12.659780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6070 is same with the state(6) to be set 00:09:19.995 Initializing NVMe Controllers 00:09:19.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:19.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:19.995 Initialization complete. Launching workers. 00:09:19.995 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6177 00:09:19.995 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6453, failed to submit 44 00:09:19.995 success 6299, unsuccessful 154, failed 0 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.995 rmmod nvme_tcp 00:09:19.995 rmmod nvme_fabrics 00:09:19.995 rmmod nvme_keyring 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2325504 ']' 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2325504 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2325504 ']' 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2325504 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2325504 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2325504' 00:09:19.995 killing process with pid 2325504 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2325504 00:09:19.995 07:52:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2325504 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.995 07:52:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.933 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.933 00:09:20.933 real 0m31.379s 00:09:20.933 user 0m42.508s 00:09:20.933 sys 0m10.690s 00:09:20.933 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.933 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.933 ************************************ 00:09:20.933 END TEST nvmf_zcopy 00:09:20.933 ************************************ 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.192 ************************************ 00:09:21.192 START TEST nvmf_nmic 00:09:21.192 ************************************ 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:21.192 * Looking for test storage... 
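The zcopy teardown traced above drives the target entirely through SPDK's JSON-RPC interface: it removes the original namespace, wraps malloc0 in a delay bdev, re-attaches it as NSID 1, and then runs the abort example against the TCP listener. A minimal stand-alone sketch of that same sequence using scripts/rpc.py, assuming a target that is already running with subsystem nqn.2016-06.io.spdk:cnode1 and a malloc0 bdev (the suite's rpc_cmd helper wraps the same script), could look like this:

#!/usr/bin/env bash
# Sketch only: re-plays the RPC sequence from the zcopy teardown shown above.
# Assumes nvmf_tgt is listening on the default /var/tmp/spdk.sock.
set -e
rpc=./scripts/rpc.py

# Detach the plain malloc namespace from the subsystem.
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Wrap malloc0 in a delay bdev; latency arguments copied verbatim from the trace.
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Expose the delayed bdev as NSID 1 again.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Drive 50/50 random read/write I/O at the listener for 5 s and exercise abort handling.
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'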
00:09:21.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.192 --rc genhtml_branch_coverage=1 00:09:21.192 --rc genhtml_function_coverage=1 00:09:21.192 --rc genhtml_legend=1 00:09:21.192 --rc geninfo_all_blocks=1 00:09:21.192 --rc geninfo_unexecuted_blocks=1 00:09:21.192 00:09:21.192 ' 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.192 --rc genhtml_branch_coverage=1 00:09:21.192 --rc genhtml_function_coverage=1 00:09:21.192 --rc genhtml_legend=1 00:09:21.192 --rc geninfo_all_blocks=1 00:09:21.192 --rc geninfo_unexecuted_blocks=1 00:09:21.192 00:09:21.192 ' 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.192 --rc genhtml_branch_coverage=1 00:09:21.192 --rc genhtml_function_coverage=1 00:09:21.192 --rc genhtml_legend=1 00:09:21.192 --rc geninfo_all_blocks=1 00:09:21.192 --rc geninfo_unexecuted_blocks=1 00:09:21.192 00:09:21.192 ' 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.192 --rc genhtml_branch_coverage=1 00:09:21.192 --rc genhtml_function_coverage=1 00:09:21.192 --rc genhtml_legend=1 00:09:21.192 --rc geninfo_all_blocks=1 00:09:21.192 --rc geninfo_unexecuted_blocks=1 00:09:21.192 00:09:21.192 ' 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.192 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
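Before sourcing nvmf/common.sh, the nmic test probes the installed lcov with the version helpers from scripts/common.sh, which the trace above walks through token by token: split both version strings on '.', '-' and ':', then compare element by element. A condensed stand-alone sketch of that less-than check (not the literal library code) is:

# Sketch: element-wise "is version A older than version B", as traced above.
lt() {
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "old lcov, keep the lcov_*_coverage rc options"   # true in this run (lcov 1.15)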
00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.193 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:21.451 
07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.451 07:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:26.718 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:26.718 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.718 07:52:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:26.718 Found net devices under 0000:86:00.0: cvl_0_0 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:26.718 Found net devices under 0000:86:00.1: cvl_0_1 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:26.718 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:09:26.719 00:09:26.719 --- 10.0.0.2 ping statistics --- 00:09:26.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.719 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:09:26.719 00:09:26.719 --- 10.0.0.1 ping statistics --- 00:09:26.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.719 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2332784 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2332784 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2332784 ']' 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.719 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.719 [2024-11-27 07:52:20.651576] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:09:26.719 [2024-11-27 07:52:20.651630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.719 [2024-11-27 07:52:20.721096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.719 [2024-11-27 07:52:20.767230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.719 [2024-11-27 07:52:20.767264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.719 [2024-11-27 07:52:20.767272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.719 [2024-11-27 07:52:20.767278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.719 [2024-11-27 07:52:20.767284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.719 [2024-11-27 07:52:20.768722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.719 [2024-11-27 07:52:20.768822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.719 [2024-11-27 07:52:20.768833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.719 [2024-11-27 07:52:20.768835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.978 [2024-11-27 07:52:20.915326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.978 Malloc0 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.978 [2024-11-27 07:52:20.984109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.978 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:26.978 test case1: single bdev can't be used in multiple subsystems 00:09:26.979 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:26.979 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.979 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.979 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.979 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:26.979 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.979 07:52:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.979 [2024-11-27 07:52:21.007987] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:26.979 [2024-11-27 07:52:21.008009] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:26.979 [2024-11-27 07:52:21.008016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.979 request: 00:09:26.979 { 00:09:26.979 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:26.979 "namespace": { 00:09:26.979 "bdev_name": "Malloc0", 00:09:26.979 "no_auto_visible": false, 
00:09:26.979 "hide_metadata": false 00:09:26.979 }, 00:09:26.979 "method": "nvmf_subsystem_add_ns", 00:09:26.979 "req_id": 1 00:09:26.979 } 00:09:26.979 Got JSON-RPC error response 00:09:26.979 response: 00:09:26.979 { 00:09:26.979 "code": -32602, 00:09:26.979 "message": "Invalid parameters" 00:09:26.979 } 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:26.979 Adding namespace failed - expected result. 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:26.979 test case2: host connect to nvmf target in multiple paths 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.979 [2024-11-27 07:52:21.020153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.979 07:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.355 07:52:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:29.291 07:52:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.291 07:52:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:29.291 07:52:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.292 07:52:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:29.292 07:52:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:31.197 07:52:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:31.197 07:52:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:31.197 07:52:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.197 07:52:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:31.197 07:52:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.197 07:52:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:31.197 07:52:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:31.197 [global] 00:09:31.197 thread=1 00:09:31.197 invalidate=1 00:09:31.197 rw=write 00:09:31.197 time_based=1 00:09:31.197 runtime=1 00:09:31.197 ioengine=libaio 00:09:31.197 direct=1 00:09:31.197 bs=4096 00:09:31.197 iodepth=1 00:09:31.197 norandommap=0 00:09:31.197 numjobs=1 00:09:31.197 00:09:31.197 verify_dump=1 00:09:31.197 verify_backlog=512 00:09:31.197 verify_state_save=0 00:09:31.197 do_verify=1 00:09:31.197 verify=crc32c-intel 00:09:31.197 [job0] 00:09:31.197 filename=/dev/nvme0n1 00:09:31.197 Could not set queue depth (nvme0n1) 00:09:31.455 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.455 fio-3.35 00:09:31.455 Starting 1 thread 00:09:32.833 00:09:32.833 job0: (groupid=0, jobs=1): err= 0: pid=2333829: Wed Nov 27 07:52:26 2024 00:09:32.833 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:32.833 slat (nsec): min=6877, max=42957, avg=8039.46, stdev=1581.07 00:09:32.833 clat (usec): min=200, max=40583, avg=287.42, stdev=892.04 00:09:32.833 lat (usec): min=208, max=40591, avg=295.46, stdev=892.04 00:09:32.833 clat percentiles (usec): 00:09:32.833 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 233], 00:09:32.833 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:09:32.833 | 70.00th=[ 269], 80.00th=[ 330], 90.00th=[ 334], 95.00th=[ 334], 00:09:32.833 | 99.00th=[ 404], 99.50th=[ 433], 99.90th=[ 465], 99.95th=[ 816], 00:09:32.833 | 99.99th=[40633] 00:09:32.833 write: IOPS=2133, BW=8535KiB/s (8740kB/s)(8544KiB/1001msec); 0 zone resets 00:09:32.833 slat (nsec): min=10008, max=46204, avg=11207.00, stdev=2041.70 00:09:32.833 clat (usec): min=121, max=831, avg=167.80, stdev=32.06 00:09:32.833 lat (usec): min=132, max=843, avg=179.01, stdev=32.24 00:09:32.833 clat percentiles (usec): 00:09:32.833 | 1.00th=[ 131], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 155], 00:09:32.833 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:09:32.833 | 70.00th=[ 172], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 192], 00:09:32.833 | 99.00th=[ 306], 99.50th=[ 306], 99.90th=[ 676], 99.95th=[ 725], 00:09:32.833 | 99.99th=[ 832] 00:09:32.833 bw ( KiB/s): min= 8504, max= 8504, per=99.63%, avg=8504.00, stdev= 0.00, samples=1 00:09:32.833 iops : min= 2126, max= 2126, avg=2126.00, stdev= 0.00, samples=1 00:09:32.833 lat (usec) : 250=70.94%, 500=28.92%, 750=0.07%, 1000=0.05% 00:09:32.833 lat (msec) : 50=0.02% 00:09:32.833 cpu : usr=3.60%, sys=6.40%, ctx=4184, majf=0, minf=1 00:09:32.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.833 issued rwts: total=2048,2136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.833 00:09:32.833 Run status group 0 (all jobs): 00:09:32.833 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:09:32.833 WRITE: bw=8535KiB/s (8740kB/s), 8535KiB/s-8535KiB/s (8740kB/s-8740kB/s), io=8544KiB (8749kB), run=1001-1001msec 00:09:32.833 00:09:32.833 Disk stats (read/write): 00:09:32.833 nvme0n1: ios=1861/2048, merge=0/0, ticks=509/323, in_queue=832, util=91.38% 00:09:32.833 
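Note: the [global]/[job0] parameters the fio-wrapper printed just above correspond to a direct fio invocation roughly like the sketch below. This is an illustrative, hand-written reproduction only (it is not emitted by the harness); it assumes the NVMe/TCP namespace enumerated as /dev/nvme0n1, as it did in this run, and it omits options that were left at their defaults (norandommap=0).

  # Sketch: replay the same single-job 4 KiB verify-write workload by hand,
  # mirroring the job file shown above (libaio, direct I/O, queue depth 1,
  # 1 s time-based run, crc32c-intel data verification).
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread --invalidate=1 \
      --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 \
      --verify_dump=1 --verify_state_save=0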
07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.833 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.833 rmmod nvme_tcp 00:09:32.833 rmmod nvme_fabrics 00:09:33.092 rmmod nvme_keyring 00:09:33.092 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.092 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:33.092 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:33.092 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2332784 ']' 00:09:33.092 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2332784 00:09:33.092 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2332784 ']' 00:09:33.092 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2332784 00:09:33.092 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:33.092 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.092 07:52:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2332784 00:09:33.092 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.092 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.092 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2332784' 00:09:33.092 killing process with pid 2332784 00:09:33.092 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2332784 00:09:33.092 
07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2332784 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.351 07:52:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.263 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.263 00:09:35.263 real 0m14.211s 00:09:35.263 user 0m32.236s 00:09:35.263 sys 0m4.801s 00:09:35.263 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.263 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.263 ************************************ 00:09:35.263 END TEST nvmf_nmic 00:09:35.263 ************************************ 00:09:35.263 07:52:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.263 07:52:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.264 07:52:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.264 07:52:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.529 ************************************ 00:09:35.530 START TEST nvmf_fio_target 00:09:35.530 ************************************ 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.530 * Looking for test storage... 
00:09:35.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:35.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.530 --rc genhtml_branch_coverage=1 00:09:35.530 --rc genhtml_function_coverage=1 00:09:35.530 --rc genhtml_legend=1 00:09:35.530 --rc geninfo_all_blocks=1 00:09:35.530 --rc geninfo_unexecuted_blocks=1 00:09:35.530 00:09:35.530 ' 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:35.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.530 --rc genhtml_branch_coverage=1 00:09:35.530 --rc genhtml_function_coverage=1 00:09:35.530 --rc genhtml_legend=1 00:09:35.530 --rc geninfo_all_blocks=1 00:09:35.530 --rc geninfo_unexecuted_blocks=1 00:09:35.530 00:09:35.530 ' 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:35.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.530 --rc genhtml_branch_coverage=1 00:09:35.530 --rc genhtml_function_coverage=1 00:09:35.530 --rc genhtml_legend=1 00:09:35.530 --rc geninfo_all_blocks=1 00:09:35.530 --rc geninfo_unexecuted_blocks=1 00:09:35.530 00:09:35.530 ' 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.530 --rc genhtml_branch_coverage=1 00:09:35.530 --rc genhtml_function_coverage=1 00:09:35.530 --rc genhtml_legend=1 00:09:35.530 --rc geninfo_all_blocks=1 00:09:35.530 --rc geninfo_unexecuted_blocks=1 00:09:35.530 00:09:35.530 ' 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.530 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.531 07:52:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.531 07:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.802 07:52:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:40.802 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:40.803 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:40.803 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.803 07:52:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:40.803 Found net devices under 0000:86:00.0: cvl_0_0 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:40.803 Found net devices under 0000:86:00.1: cvl_0_1 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.803 07:52:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:40.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:09:40.803 00:09:40.803 --- 10.0.0.2 ping statistics --- 00:09:40.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.803 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:40.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:09:40.803 00:09:40.803 --- 10.0.0.1 ping statistics --- 00:09:40.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.803 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:40.803 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2337502 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2337502 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2337502 ']' 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.804 [2024-11-27 07:52:34.614771] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:09:40.804 [2024-11-27 07:52:34.614819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.804 [2024-11-27 07:52:34.683008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.804 [2024-11-27 07:52:34.723975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.804 [2024-11-27 07:52:34.724014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.804 [2024-11-27 07:52:34.724021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.804 [2024-11-27 07:52:34.724027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.804 [2024-11-27 07:52:34.724032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.804 [2024-11-27 07:52:34.725562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.804 [2024-11-27 07:52:34.725662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.804 [2024-11-27 07:52:34.725746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.804 [2024-11-27 07:52:34.725747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.804 07:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:41.063 [2024-11-27 07:52:35.040286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.063 07:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.323 07:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:41.323 07:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.584 07:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:41.584 07:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.843 07:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:41.843 07:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.843 07:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:41.843 07:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:42.102 07:52:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.361 07:52:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:42.361 07:52:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.621 07:52:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:42.621 07:52:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.879 07:52:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:42.879 07:52:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:42.879 07:52:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:43.139 07:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:43.139 07:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.398 07:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:43.398 07:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.658 07:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.658 [2024-11-27 07:52:37.719150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.658 07:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:43.917 07:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:44.176 07:52:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:45.555 07:52:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:45.555 07:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:45.555 07:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:45.555 07:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:45.555 07:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:45.555 07:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:47.459 07:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:47.459 07:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:47.459 07:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:47.459 07:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:47.459 07:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:47.459 07:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:47.459 07:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:47.459 [global] 00:09:47.459 thread=1 00:09:47.459 invalidate=1 00:09:47.459 rw=write 00:09:47.459 time_based=1 00:09:47.459 runtime=1 00:09:47.459 ioengine=libaio 00:09:47.459 direct=1 00:09:47.459 bs=4096 00:09:47.459 iodepth=1 00:09:47.459 norandommap=0 00:09:47.459 numjobs=1 00:09:47.459 00:09:47.459 verify_dump=1 00:09:47.459 verify_backlog=512 00:09:47.459 verify_state_save=0 00:09:47.459 do_verify=1 00:09:47.459 verify=crc32c-intel 00:09:47.459 [job0] 00:09:47.459 filename=/dev/nvme0n1 00:09:47.459 [job1] 00:09:47.459 filename=/dev/nvme0n2 00:09:47.459 [job2] 00:09:47.459 filename=/dev/nvme0n3 00:09:47.459 [job3] 00:09:47.459 filename=/dev/nvme0n4 00:09:47.459 Could not set queue depth (nvme0n1) 00:09:47.459 Could not set queue depth (nvme0n2) 00:09:47.460 Could not set queue depth (nvme0n3) 00:09:47.460 Could not set queue depth (nvme0n4) 00:09:47.719 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.719 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.719 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.719 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.719 fio-3.35 00:09:47.719 Starting 4 threads 00:09:49.096 00:09:49.096 job0: (groupid=0, jobs=1): err= 0: pid=2338941: Wed Nov 27 07:52:42 2024 00:09:49.096 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:09:49.096 slat (nsec): min=10203, max=24929, avg=22158.00, stdev=2865.15 00:09:49.096 clat (usec): min=40870, max=41074, avg=40969.50, stdev=38.83 00:09:49.096 lat (usec): min=40893, max=41085, avg=40991.66, stdev=37.29 00:09:49.096 clat percentiles (usec): 00:09:49.096 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:09:49.096 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:49.096 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:49.096 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:49.096 | 99.99th=[41157] 00:09:49.096 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:49.096 slat (usec): min=10, max=765, avg=14.13, stdev=34.55 00:09:49.096 clat (usec): min=138, max=686, avg=175.31, stdev=36.88 00:09:49.096 lat (usec): min=149, max=1073, avg=189.44, stdev=55.19 00:09:49.096 clat percentiles (usec): 00:09:49.096 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:49.096 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:09:49.096 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 225], 00:09:49.096 | 99.00th=[ 289], 99.50th=[ 367], 99.90th=[ 685], 99.95th=[ 685], 00:09:49.096 | 99.99th=[ 685] 00:09:49.096 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:09:49.096 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:49.096 lat (usec) : 250=92.88%, 500=2.81%, 750=0.19% 00:09:49.096 lat (msec) : 50=4.12% 00:09:49.096 cpu : usr=0.20%, sys=1.10%, ctx=538, majf=0, minf=1 00:09:49.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.096 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.096 job1: (groupid=0, jobs=1): err= 0: pid=2338942: Wed Nov 27 07:52:42 2024 00:09:49.096 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:09:49.096 slat (nsec): min=9651, max=23038, avg=22103.14, stdev=2790.58 00:09:49.096 clat (usec): min=40674, max=42028, avg=41312.16, stdev=514.88 00:09:49.096 lat (usec): min=40684, max=42051, avg=41334.27, stdev=515.70 00:09:49.096 clat percentiles (usec): 00:09:49.096 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:09:49.096 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:49.096 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:49.096 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:49.096 | 99.99th=[42206] 00:09:49.096 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:09:49.096 slat (nsec): min=9625, max=61477, avg=13182.26, stdev=3117.86 00:09:49.096 clat (usec): min=123, max=379, avg=201.05, stdev=37.34 00:09:49.096 lat (usec): min=134, max=440, avg=214.24, stdev=38.47 00:09:49.096 clat percentiles (usec): 00:09:49.096 | 1.00th=[ 128], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 165], 00:09:49.096 | 30.00th=[ 178], 40.00th=[ 190], 50.00th=[ 204], 60.00th=[ 219], 00:09:49.096 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 253], 00:09:49.096 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 379], 99.95th=[ 379], 00:09:49.096 | 99.99th=[ 379] 00:09:49.096 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:09:49.096 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:49.096 lat (usec) : 250=90.26%, 500=5.62% 00:09:49.096 lat (msec) : 50=4.12% 00:09:49.096 cpu : usr=0.29%, sys=0.59%, ctx=535, majf=0, minf=1 00:09:49.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:09:49.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.096 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.096 job2: (groupid=0, jobs=1): err= 0: pid=2338943: Wed Nov 27 07:52:42 2024 00:09:49.096 read: IOPS=376, BW=1506KiB/s (1543kB/s)(1508KiB/1001msec) 00:09:49.096 slat (nsec): min=6895, max=26852, avg=8476.45, stdev=3669.63 00:09:49.096 clat (usec): min=254, max=41004, avg=2329.40, stdev=8909.87 00:09:49.096 lat (usec): min=262, max=41028, avg=2337.88, stdev=8913.26 00:09:49.096 clat percentiles (usec): 00:09:49.096 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 269], 00:09:49.096 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 277], 60.00th=[ 281], 00:09:49.096 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[40633], 00:09:49.096 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:49.096 | 99.99th=[41157] 00:09:49.096 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:49.096 slat (nsec): min=11975, max=38568, avg=13356.35, stdev=1914.61 00:09:49.096 clat (usec): min=130, max=369, avg=213.30, stdev=44.83 00:09:49.096 lat (usec): min=143, max=384, avg=226.65, stdev=45.32 00:09:49.096 clat percentiles (usec): 00:09:49.096 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 161], 00:09:49.096 | 30.00th=[ 182], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 239], 00:09:49.096 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 262], 00:09:49.096 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 371], 99.95th=[ 371], 00:09:49.096 | 99.99th=[ 371] 00:09:49.096 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:09:49.096 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:49.096 lat (usec) : 250=52.98%, 500=44.88% 00:09:49.096 lat (msec) : 50=2.14% 00:09:49.096 cpu : usr=1.00%, sys=1.00%, ctx=891, majf=0, minf=1 00:09:49.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.096 issued rwts: total=377,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.096 job3: (groupid=0, jobs=1): err= 0: pid=2338944: Wed Nov 27 07:52:42 2024 00:09:49.096 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:09:49.096 slat (nsec): min=11580, max=29088, avg=21604.30, stdev=3858.87 00:09:49.096 clat (usec): min=40843, max=41915, avg=41001.26, stdev=204.90 00:09:49.096 lat (usec): min=40854, max=41944, avg=41022.86, stdev=206.95 00:09:49.096 clat percentiles (usec): 00:09:49.096 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:49.096 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:49.096 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:49.096 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:49.096 | 99.99th=[41681] 00:09:49.097 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:09:49.097 slat (nsec): min=8093, max=39713, avg=10540.35, stdev=3385.02 00:09:49.097 clat (usec): min=135, max=342, avg=175.51, stdev=24.76 00:09:49.097 lat (usec): min=144, 
max=381, avg=186.05, stdev=27.37 00:09:49.097 clat percentiles (usec): 00:09:49.097 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:09:49.097 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:09:49.097 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 206], 95.00th=[ 225], 00:09:49.097 | 99.00th=[ 262], 99.50th=[ 314], 99.90th=[ 343], 99.95th=[ 343], 00:09:49.097 | 99.99th=[ 343] 00:09:49.097 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:09:49.097 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:49.097 lat (usec) : 250=94.39%, 500=1.31% 00:09:49.097 lat (msec) : 50=4.30% 00:09:49.097 cpu : usr=0.29%, sys=0.77%, ctx=537, majf=0, minf=1 00:09:49.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.097 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.097 00:09:49.097 Run status group 0 (all jobs): 00:09:49.097 READ: bw=1706KiB/s (1747kB/s), 86.2KiB/s-1506KiB/s (88.3kB/s-1543kB/s), io=1776KiB (1819kB), run=1001-1041msec 00:09:49.097 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2046KiB/s (2015kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1041msec 00:09:49.097 00:09:49.097 Disk stats (read/write): 00:09:49.097 nvme0n1: ios=77/512, merge=0/0, ticks=951/86, in_queue=1037, util=97.90% 00:09:49.097 nvme0n2: ios=32/512, merge=0/0, ticks=720/101, in_queue=821, util=86.98% 00:09:49.097 nvme0n3: ios=40/512, merge=0/0, ticks=1653/109, in_queue=1762, util=98.44% 00:09:49.097 nvme0n4: ios=42/512, merge=0/0, ticks=1722/85, in_queue=1807, util=98.32% 00:09:49.097 07:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:49.097 [global] 00:09:49.097 thread=1 00:09:49.097 invalidate=1 00:09:49.097 rw=randwrite 00:09:49.097 time_based=1 00:09:49.097 runtime=1 00:09:49.097 ioengine=libaio 00:09:49.097 direct=1 00:09:49.097 bs=4096 00:09:49.097 iodepth=1 00:09:49.097 norandommap=0 00:09:49.097 numjobs=1 00:09:49.097 00:09:49.097 verify_dump=1 00:09:49.097 verify_backlog=512 00:09:49.097 verify_state_save=0 00:09:49.097 do_verify=1 00:09:49.097 verify=crc32c-intel 00:09:49.097 [job0] 00:09:49.097 filename=/dev/nvme0n1 00:09:49.097 [job1] 00:09:49.097 filename=/dev/nvme0n2 00:09:49.097 [job2] 00:09:49.097 filename=/dev/nvme0n3 00:09:49.097 [job3] 00:09:49.097 filename=/dev/nvme0n4 00:09:49.097 Could not set queue depth (nvme0n1) 00:09:49.097 Could not set queue depth (nvme0n2) 00:09:49.097 Could not set queue depth (nvme0n3) 00:09:49.097 Could not set queue depth (nvme0n4) 00:09:49.357 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.357 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.358 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.358 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.358 fio-3.35 00:09:49.358 Starting 4 threads 00:09:50.743 00:09:50.743 job0: (groupid=0, jobs=1): err= 0: pid=2339310: Wed Nov 27 07:52:44 2024 
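For reference, the target-side configuration behind these fio jobs is the rpc.py sequence traced at target/fio.sh@19 through @48 above: a TCP transport, seven 64 MiB malloc bdevs, a raid0 and a concat array built from five of them, a subsystem exposing Malloc0, Malloc1, raid0 and concat0 as namespaces, a TCP listener on 10.0.0.2:4420, and finally an nvme-cli connect plus a wait for all four namespaces to enumerate. Condensed into a stand-alone sketch (ordering slightly compacted, $SPDK_DIR standing in for the workspace path, addresses, NQNs and the serial copied from this log):

rpc="$SPDK_DIR/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done    # returns Malloc0 .. Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
  --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
# rough equivalent of 'waitforserial SPDKISFASTANDAWESOME 4': wait until all four namespaces show up
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 4 ]; do
  sleep 2
done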
00:09:50.743 read: IOPS=1297, BW=5191KiB/s (5315kB/s)(5196KiB/1001msec) 00:09:50.743 slat (nsec): min=7361, max=41336, avg=8609.30, stdev=1683.94 00:09:50.743 clat (usec): min=202, max=41064, avg=523.82, stdev=2508.70 00:09:50.743 lat (usec): min=211, max=41089, avg=532.43, stdev=2509.10 00:09:50.743 clat percentiles (usec): 00:09:50.743 | 1.00th=[ 231], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 297], 00:09:50.743 | 30.00th=[ 314], 40.00th=[ 330], 50.00th=[ 355], 60.00th=[ 375], 00:09:50.743 | 70.00th=[ 396], 80.00th=[ 453], 90.00th=[ 494], 95.00th=[ 502], 00:09:50.743 | 99.00th=[ 611], 99.50th=[ 619], 99.90th=[41157], 99.95th=[41157], 00:09:50.743 | 99.99th=[41157] 00:09:50.743 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:50.743 slat (nsec): min=10732, max=42777, avg=11904.87, stdev=2027.96 00:09:50.743 clat (usec): min=119, max=926, avg=183.36, stdev=51.03 00:09:50.743 lat (usec): min=131, max=939, avg=195.27, stdev=51.25 00:09:50.743 clat percentiles (usec): 00:09:50.743 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:09:50.743 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 194], 00:09:50.743 | 70.00th=[ 210], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 260], 00:09:50.743 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 404], 99.95th=[ 930], 00:09:50.743 | 99.99th=[ 930] 00:09:50.743 bw ( KiB/s): min= 8192, max= 8192, per=38.16%, avg=8192.00, stdev= 0.00, samples=1 00:09:50.743 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:50.743 lat (usec) : 250=50.19%, 500=47.27%, 750=2.33%, 1000=0.04% 00:09:50.743 lat (msec) : 50=0.18% 00:09:50.743 cpu : usr=3.40%, sys=3.60%, ctx=2836, majf=0, minf=1 00:09:50.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.743 issued rwts: total=1299,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.743 job1: (groupid=0, jobs=1): err= 0: pid=2339311: Wed Nov 27 07:52:44 2024 00:09:50.743 read: IOPS=23, BW=92.8KiB/s (95.1kB/s)(96.0KiB/1034msec) 00:09:50.743 slat (nsec): min=9779, max=35885, avg=21357.17, stdev=4841.85 00:09:50.743 clat (usec): min=566, max=41386, avg=39318.18, stdev=8255.06 00:09:50.743 lat (usec): min=589, max=41397, avg=39339.54, stdev=8254.68 00:09:50.743 clat percentiles (usec): 00:09:50.743 | 1.00th=[ 570], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:50.743 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:50.743 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:50.743 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:50.743 | 99.99th=[41157] 00:09:50.743 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:09:50.743 slat (nsec): min=9150, max=36199, avg=10064.62, stdev=1628.71 00:09:50.743 clat (usec): min=139, max=297, avg=163.26, stdev=15.34 00:09:50.743 lat (usec): min=149, max=334, avg=173.32, stdev=15.91 00:09:50.743 clat percentiles (usec): 00:09:50.743 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:09:50.743 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:09:50.743 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 188], 00:09:50.743 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 297], 99.95th=[ 297], 00:09:50.743 | 
99.99th=[ 297] 00:09:50.743 bw ( KiB/s): min= 4096, max= 4096, per=19.08%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.743 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.743 lat (usec) : 250=95.15%, 500=0.37%, 750=0.19% 00:09:50.743 lat (msec) : 50=4.29% 00:09:50.743 cpu : usr=0.29%, sys=0.48%, ctx=536, majf=0, minf=2 00:09:50.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.743 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.743 job2: (groupid=0, jobs=1): err= 0: pid=2339312: Wed Nov 27 07:52:44 2024 00:09:50.743 read: IOPS=1352, BW=5411KiB/s (5540kB/s)(5416KiB/1001msec) 00:09:50.743 slat (nsec): min=7294, max=39443, avg=8555.61, stdev=1716.84 00:09:50.743 clat (usec): min=206, max=41126, avg=479.90, stdev=2201.80 00:09:50.743 lat (usec): min=215, max=41135, avg=488.45, stdev=2201.80 00:09:50.743 clat percentiles (usec): 00:09:50.743 | 1.00th=[ 227], 5.00th=[ 241], 10.00th=[ 253], 20.00th=[ 281], 00:09:50.743 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 338], 60.00th=[ 363], 00:09:50.743 | 70.00th=[ 388], 80.00th=[ 453], 90.00th=[ 498], 95.00th=[ 537], 00:09:50.743 | 99.00th=[ 611], 99.50th=[ 660], 99.90th=[41157], 99.95th=[41157], 00:09:50.743 | 99.99th=[41157] 00:09:50.743 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:50.743 slat (nsec): min=10207, max=40928, avg=11381.08, stdev=1927.66 00:09:50.743 clat (usec): min=125, max=386, avg=203.60, stdev=42.36 00:09:50.743 lat (usec): min=136, max=426, avg=214.98, stdev=42.51 00:09:50.743 clat percentiles (usec): 00:09:50.743 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 153], 00:09:50.743 | 30.00th=[ 163], 40.00th=[ 196], 50.00th=[ 215], 60.00th=[ 235], 00:09:50.743 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 255], 00:09:50.743 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 371], 99.95th=[ 388], 00:09:50.743 | 99.99th=[ 388] 00:09:50.743 bw ( KiB/s): min= 7656, max= 7656, per=35.66%, avg=7656.00, stdev= 0.00, samples=1 00:09:50.743 iops : min= 1914, max= 1914, avg=1914.00, stdev= 0.00, samples=1 00:09:50.743 lat (usec) : 250=52.66%, 500=43.25%, 750=3.94% 00:09:50.743 lat (msec) : 50=0.14% 00:09:50.743 cpu : usr=2.70%, sys=4.30%, ctx=2890, majf=0, minf=2 00:09:50.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.744 issued rwts: total=1354,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.744 job3: (groupid=0, jobs=1): err= 0: pid=2339314: Wed Nov 27 07:52:44 2024 00:09:50.744 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:50.744 slat (nsec): min=6883, max=23796, avg=7873.35, stdev=1374.43 00:09:50.744 clat (usec): min=203, max=41318, avg=392.60, stdev=1804.45 00:09:50.744 lat (usec): min=211, max=41326, avg=400.48, stdev=1804.45 00:09:50.744 clat percentiles (usec): 00:09:50.744 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 249], 20.00th=[ 262], 00:09:50.744 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:09:50.744 | 70.00th=[ 322], 80.00th=[ 347], 
90.00th=[ 433], 95.00th=[ 465], 00:09:50.744 | 99.00th=[ 506], 99.50th=[ 635], 99.90th=[41157], 99.95th=[41157], 00:09:50.744 | 99.99th=[41157] 00:09:50.744 write: IOPS=1964, BW=7856KiB/s (8045kB/s)(7864KiB/1001msec); 0 zone resets 00:09:50.744 slat (nsec): min=9766, max=73994, avg=10779.53, stdev=2205.48 00:09:50.744 clat (usec): min=123, max=415, avg=181.35, stdev=40.28 00:09:50.744 lat (usec): min=133, max=449, avg=192.13, stdev=40.58 00:09:50.744 clat percentiles (usec): 00:09:50.744 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:09:50.744 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 167], 60.00th=[ 188], 00:09:50.744 | 70.00th=[ 200], 80.00th=[ 215], 90.00th=[ 243], 95.00th=[ 255], 00:09:50.744 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[ 371], 99.95th=[ 416], 00:09:50.744 | 99.99th=[ 416] 00:09:50.744 bw ( KiB/s): min= 8192, max= 8192, per=38.16%, avg=8192.00, stdev= 0.00, samples=1 00:09:50.744 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:50.744 lat (usec) : 250=57.02%, 500=42.32%, 750=0.57% 00:09:50.744 lat (msec) : 50=0.09% 00:09:50.744 cpu : usr=1.20%, sys=3.80%, ctx=3504, majf=0, minf=1 00:09:50.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.744 issued rwts: total=1536,1966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.744 00:09:50.744 Run status group 0 (all jobs): 00:09:50.744 READ: bw=15.9MiB/s (16.7MB/s), 92.8KiB/s-6138KiB/s (95.1kB/s-6285kB/s), io=16.5MiB (17.3MB), run=1001-1034msec 00:09:50.744 WRITE: bw=21.0MiB/s (22.0MB/s), 1981KiB/s-7856KiB/s (2028kB/s-8045kB/s), io=21.7MiB (22.7MB), run=1001-1034msec 00:09:50.744 00:09:50.744 Disk stats (read/write): 00:09:50.744 nvme0n1: ios=1150/1536, merge=0/0, ticks=1492/261, in_queue=1753, util=98.40% 00:09:50.744 nvme0n2: ios=39/512, merge=0/0, ticks=753/80, in_queue=833, util=87.12% 00:09:50.744 nvme0n3: ios=1024/1440, merge=0/0, ticks=505/280, in_queue=785, util=88.98% 00:09:50.744 nvme0n4: ios=1479/1536, merge=0/0, ticks=1513/271, in_queue=1784, util=98.53% 00:09:50.744 07:52:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:50.744 [global] 00:09:50.744 thread=1 00:09:50.744 invalidate=1 00:09:50.744 rw=write 00:09:50.744 time_based=1 00:09:50.744 runtime=1 00:09:50.744 ioengine=libaio 00:09:50.744 direct=1 00:09:50.744 bs=4096 00:09:50.744 iodepth=128 00:09:50.744 norandommap=0 00:09:50.744 numjobs=1 00:09:50.744 00:09:50.744 verify_dump=1 00:09:50.744 verify_backlog=512 00:09:50.744 verify_state_save=0 00:09:50.744 do_verify=1 00:09:50.744 verify=crc32c-intel 00:09:50.744 [job0] 00:09:50.744 filename=/dev/nvme0n1 00:09:50.744 [job1] 00:09:50.744 filename=/dev/nvme0n2 00:09:50.744 [job2] 00:09:50.744 filename=/dev/nvme0n3 00:09:50.744 [job3] 00:09:50.744 filename=/dev/nvme0n4 00:09:50.744 Could not set queue depth (nvme0n1) 00:09:50.744 Could not set queue depth (nvme0n2) 00:09:50.744 Could not set queue depth (nvme0n3) 00:09:50.744 Could not set queue depth (nvme0n4) 00:09:51.002 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.002 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:51.002 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.002 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.002 fio-3.35 00:09:51.002 Starting 4 threads 00:09:52.381 00:09:52.381 job0: (groupid=0, jobs=1): err= 0: pid=2339691: Wed Nov 27 07:52:46 2024 00:09:52.381 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:09:52.381 slat (nsec): min=1047, max=15092k, avg=93789.96, stdev=709360.43 00:09:52.381 clat (usec): min=1180, max=32450, avg=13016.33, stdev=5445.40 00:09:52.381 lat (usec): min=1187, max=32486, avg=13110.12, stdev=5496.22 00:09:52.381 clat percentiles (usec): 00:09:52.381 | 1.00th=[ 2089], 5.00th=[ 4228], 10.00th=[ 7111], 20.00th=[ 9241], 00:09:52.381 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12125], 60.00th=[12780], 00:09:52.381 | 70.00th=[13960], 80.00th=[17695], 90.00th=[21627], 95.00th=[23725], 00:09:52.381 | 99.00th=[27919], 99.50th=[30278], 99.90th=[31851], 99.95th=[31851], 00:09:52.381 | 99.99th=[32375] 00:09:52.381 write: IOPS=5159, BW=20.2MiB/s (21.1MB/s)(20.2MiB/1001msec); 0 zone resets 00:09:52.381 slat (nsec): min=1875, max=15872k, avg=86014.29, stdev=609841.62 00:09:52.381 clat (usec): min=484, max=32446, avg=11635.29, stdev=5391.17 00:09:52.381 lat (usec): min=689, max=32465, avg=11721.31, stdev=5436.08 00:09:52.381 clat percentiles (usec): 00:09:52.381 | 1.00th=[ 1975], 5.00th=[ 4293], 10.00th=[ 5866], 20.00th=[ 7767], 00:09:52.381 | 30.00th=[ 8356], 40.00th=[ 9372], 50.00th=[10421], 60.00th=[11469], 00:09:52.381 | 70.00th=[13042], 80.00th=[15270], 90.00th=[20841], 95.00th=[21627], 00:09:52.381 | 99.00th=[28443], 99.50th=[29230], 99.90th=[31851], 99.95th=[31851], 00:09:52.381 | 99.99th=[32375] 00:09:52.381 bw ( KiB/s): min=24576, max=24576, per=34.39%, avg=24576.00, stdev= 0.00, samples=1 00:09:52.381 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:09:52.381 lat (usec) : 500=0.01%, 750=0.07% 00:09:52.381 lat (msec) : 2=0.66%, 4=2.26%, 10=32.32%, 20=52.51%, 50=12.17% 00:09:52.381 cpu : usr=2.80%, sys=7.00%, ctx=365, majf=0, minf=1 00:09:52.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:52.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.381 issued rwts: total=5120,5165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.381 job1: (groupid=0, jobs=1): err= 0: pid=2339692: Wed Nov 27 07:52:46 2024 00:09:52.381 read: IOPS=4675, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1005msec) 00:09:52.381 slat (nsec): min=1150, max=16426k, avg=99613.20, stdev=776787.82 00:09:52.381 clat (usec): min=2262, max=49789, avg=13104.05, stdev=5574.11 00:09:52.381 lat (usec): min=3377, max=51267, avg=13203.66, stdev=5627.33 00:09:52.381 clat percentiles (usec): 00:09:52.381 | 1.00th=[ 6718], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[ 9896], 00:09:52.381 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10945], 60.00th=[11863], 00:09:52.381 | 70.00th=[13435], 80.00th=[15139], 90.00th=[20841], 95.00th=[24773], 00:09:52.381 | 99.00th=[35914], 99.50th=[42206], 99.90th=[49546], 99.95th=[49546], 00:09:52.381 | 99.99th=[49546] 00:09:52.381 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:09:52.381 slat (nsec): min=1958, max=16855k, avg=87318.55, stdev=679014.08 
00:09:52.381 clat (usec): min=657, max=60130, avg=12877.26, stdev=9966.50 00:09:52.381 lat (usec): min=667, max=60140, avg=12964.57, stdev=10045.95 00:09:52.381 clat percentiles (usec): 00:09:52.381 | 1.00th=[ 1631], 5.00th=[ 3654], 10.00th=[ 5342], 20.00th=[ 7111], 00:09:52.381 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10552], 00:09:52.381 | 70.00th=[11600], 80.00th=[15139], 90.00th=[24249], 95.00th=[39584], 00:09:52.381 | 99.00th=[49021], 99.50th=[54264], 99.90th=[60031], 99.95th=[60031], 00:09:52.381 | 99.99th=[60031] 00:09:52.381 bw ( KiB/s): min=16376, max=24288, per=28.45%, avg=20332.00, stdev=5594.63, samples=2 00:09:52.381 iops : min= 4094, max= 6072, avg=5083.00, stdev=1398.66, samples=2 00:09:52.381 lat (usec) : 750=0.03% 00:09:52.381 lat (msec) : 2=0.60%, 4=2.92%, 10=35.06%, 20=49.78%, 50=11.12% 00:09:52.381 lat (msec) : 100=0.48% 00:09:52.381 cpu : usr=3.59%, sys=5.98%, ctx=393, majf=0, minf=2 00:09:52.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:52.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.381 issued rwts: total=4699,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.381 job2: (groupid=0, jobs=1): err= 0: pid=2339693: Wed Nov 27 07:52:46 2024 00:09:52.381 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:52.381 slat (nsec): min=1425, max=20220k, avg=124602.08, stdev=811091.65 00:09:52.381 clat (usec): min=8026, max=51769, avg=15196.72, stdev=6510.14 00:09:52.381 lat (usec): min=8603, max=67188, avg=15321.32, stdev=6595.60 00:09:52.381 clat percentiles (usec): 00:09:52.381 | 1.00th=[ 8848], 5.00th=[10290], 10.00th=[11338], 20.00th=[11863], 00:09:52.381 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13698], 00:09:52.381 | 70.00th=[14746], 80.00th=[17433], 90.00th=[18744], 95.00th=[32637], 00:09:52.381 | 99.00th=[43779], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:09:52.381 | 99.99th=[51643] 00:09:52.381 write: IOPS=3234, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1003msec); 0 zone resets 00:09:52.381 slat (usec): min=2, max=22803, avg=182.62, stdev=1040.06 00:09:52.381 clat (msec): min=2, max=101, avg=24.41, stdev=18.40 00:09:52.381 lat (msec): min=2, max=101, avg=24.59, stdev=18.52 00:09:52.381 clat percentiles (msec): 00:09:52.381 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 12], 00:09:52.381 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 20], 60.00th=[ 22], 00:09:52.381 | 70.00th=[ 26], 80.00th=[ 37], 90.00th=[ 49], 95.00th=[ 57], 00:09:52.381 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 102], 99.95th=[ 102], 00:09:52.381 | 99.99th=[ 102] 00:09:52.381 bw ( KiB/s): min=10872, max=14064, per=17.45%, avg=12468.00, stdev=2257.08, samples=2 00:09:52.381 iops : min= 2718, max= 3516, avg=3117.00, stdev=564.27, samples=2 00:09:52.381 lat (msec) : 4=0.17%, 10=5.76%, 20=66.45%, 50=22.96%, 100=4.54% 00:09:52.381 lat (msec) : 250=0.11% 00:09:52.381 cpu : usr=1.30%, sys=4.59%, ctx=394, majf=0, minf=1 00:09:52.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:52.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.381 issued rwts: total=3072,3244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.381 latency : target=0, window=0, percentile=100.00%, depth=128 
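These runs are driven through scripts/fio-wrapper, whose -i/-d/-t/-r/-v flags map onto the bs, iodepth, rw, runtime and verify settings of the job files echoed above (compare each wrapper invocation with the [global] section printed after it). As a stand-alone reproduction of the third run (-p nvmf -i 4096 -d 128 -t write -r 1 -v), the same job file could be written by hand; the sketch below only restates parameters already echoed in the log, the file name nvmf-write-qd128.fio is arbitrary, and it assumes the namespaces still enumerate as /dev/nvme0n1 through /dev/nvme0n4. The "Could not set queue depth" messages printed before each run are non-fatal here; every job still completes with err= 0, as the per-job statistics show.

cat > nvmf-write-qd128.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify=crc32c-intel
do_verify=1
verify_dump=1
verify_backlog=512
verify_state_save=0

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf-write-qd128.fio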
00:09:52.381 job3: (groupid=0, jobs=1): err= 0: pid=2339694: Wed Nov 27 07:52:46 2024 00:09:52.381 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:09:52.381 slat (nsec): min=1089, max=12597k, avg=114345.77, stdev=766267.28 00:09:52.381 clat (usec): min=1976, max=41442, avg=14675.54, stdev=6139.57 00:09:52.381 lat (usec): min=1996, max=41455, avg=14789.89, stdev=6185.51 00:09:52.381 clat percentiles (usec): 00:09:52.381 | 1.00th=[ 3982], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10683], 00:09:52.381 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12780], 60.00th=[13566], 00:09:52.381 | 70.00th=[14877], 80.00th=[16188], 90.00th=[25035], 95.00th=[28443], 00:09:52.382 | 99.00th=[34341], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:52.382 | 99.99th=[41681] 00:09:52.382 write: IOPS=4422, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1001msec); 0 zone resets 00:09:52.382 slat (nsec): min=1934, max=11480k, avg=113186.81, stdev=606839.76 00:09:52.382 clat (usec): min=389, max=46617, avg=15123.07, stdev=7847.80 00:09:52.382 lat (usec): min=1651, max=46620, avg=15236.25, stdev=7901.59 00:09:52.382 clat percentiles (usec): 00:09:52.382 | 1.00th=[ 3654], 5.00th=[ 6980], 10.00th=[ 8848], 20.00th=[10290], 00:09:52.382 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12125], 60.00th=[13698], 00:09:52.382 | 70.00th=[16188], 80.00th=[19268], 90.00th=[26870], 95.00th=[33162], 00:09:52.382 | 99.00th=[41681], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:09:52.382 | 99.99th=[46400] 00:09:52.382 bw ( KiB/s): min=16664, max=16664, per=23.32%, avg=16664.00, stdev= 0.00, samples=1 00:09:52.382 iops : min= 4166, max= 4166, avg=4166.00, stdev= 0.00, samples=1 00:09:52.382 lat (usec) : 500=0.01% 00:09:52.382 lat (msec) : 2=0.40%, 4=1.13%, 10=14.43%, 20=67.12%, 50=16.91% 00:09:52.382 cpu : usr=2.30%, sys=4.30%, ctx=466, majf=0, minf=1 00:09:52.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:52.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.382 issued rwts: total=4096,4427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.382 00:09:52.382 Run status group 0 (all jobs): 00:09:52.382 READ: bw=66.0MiB/s (69.2MB/s), 12.0MiB/s-20.0MiB/s (12.5MB/s-20.9MB/s), io=66.4MiB (69.6MB), run=1001-1005msec 00:09:52.382 WRITE: bw=69.8MiB/s (73.2MB/s), 12.6MiB/s-20.2MiB/s (13.2MB/s-21.1MB/s), io=70.1MiB (73.5MB), run=1001-1005msec 00:09:52.382 00:09:52.382 Disk stats (read/write): 00:09:52.382 nvme0n1: ios=4454/4608, merge=0/0, ticks=45826/39286, in_queue=85112, util=97.60% 00:09:52.382 nvme0n2: ios=3825/4096, merge=0/0, ticks=37828/38990, in_queue=76818, util=86.99% 00:09:52.382 nvme0n3: ios=2206/2560, merge=0/0, ticks=15612/30682, in_queue=46294, util=96.35% 00:09:52.382 nvme0n4: ios=3584/3942, merge=0/0, ticks=31812/43354, in_queue=75166, util=89.60% 00:09:52.382 07:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:52.382 [global] 00:09:52.382 thread=1 00:09:52.382 invalidate=1 00:09:52.382 rw=randwrite 00:09:52.382 time_based=1 00:09:52.382 runtime=1 00:09:52.382 ioengine=libaio 00:09:52.382 direct=1 00:09:52.382 bs=4096 00:09:52.382 iodepth=128 00:09:52.382 norandommap=0 00:09:52.382 numjobs=1 00:09:52.382 00:09:52.382 verify_dump=1 00:09:52.382 
verify_backlog=512 00:09:52.382 verify_state_save=0 00:09:52.382 do_verify=1 00:09:52.382 verify=crc32c-intel 00:09:52.382 [job0] 00:09:52.382 filename=/dev/nvme0n1 00:09:52.382 [job1] 00:09:52.382 filename=/dev/nvme0n2 00:09:52.382 [job2] 00:09:52.382 filename=/dev/nvme0n3 00:09:52.382 [job3] 00:09:52.382 filename=/dev/nvme0n4 00:09:52.382 Could not set queue depth (nvme0n1) 00:09:52.382 Could not set queue depth (nvme0n2) 00:09:52.382 Could not set queue depth (nvme0n3) 00:09:52.382 Could not set queue depth (nvme0n4) 00:09:52.382 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.382 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.382 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.382 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.382 fio-3.35 00:09:52.382 Starting 4 threads 00:09:53.889 00:09:53.889 job0: (groupid=0, jobs=1): err= 0: pid=2340060: Wed Nov 27 07:52:47 2024 00:09:53.889 read: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec) 00:09:53.889 slat (nsec): min=1582, max=36898k, avg=212174.62, stdev=1763592.84 00:09:53.889 clat (usec): min=9624, max=95157, avg=23871.94, stdev=16626.42 00:09:53.889 lat (usec): min=9633, max=95175, avg=24084.12, stdev=16797.37 00:09:53.889 clat percentiles (usec): 00:09:53.889 | 1.00th=[10945], 5.00th=[12256], 10.00th=[12518], 20.00th=[13173], 00:09:53.889 | 30.00th=[13566], 40.00th=[14353], 50.00th=[14615], 60.00th=[16909], 00:09:53.889 | 70.00th=[22676], 80.00th=[35914], 90.00th=[53740], 95.00th=[58459], 00:09:53.889 | 99.00th=[67634], 99.50th=[67634], 99.90th=[79168], 99.95th=[94897], 00:09:53.889 | 99.99th=[94897] 00:09:53.889 write: IOPS=1913, BW=7654KiB/s (7838kB/s)(7708KiB/1007msec); 0 zone resets 00:09:53.889 slat (usec): min=2, max=26182, avg=345.11, stdev=1707.40 00:09:53.889 clat (msec): min=4, max=145, avg=47.14, stdev=28.39 00:09:53.889 lat (msec): min=5, max=145, avg=47.48, stdev=28.54 00:09:53.889 clat percentiles (msec): 00:09:53.889 | 1.00th=[ 14], 5.00th=[ 21], 10.00th=[ 25], 20.00th=[ 29], 00:09:53.889 | 30.00th=[ 31], 40.00th=[ 33], 50.00th=[ 39], 60.00th=[ 44], 00:09:53.889 | 70.00th=[ 51], 80.00th=[ 58], 90.00th=[ 87], 95.00th=[ 125], 00:09:53.889 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:09:53.889 | 99.99th=[ 146] 00:09:53.889 bw ( KiB/s): min= 6200, max= 8192, per=10.95%, avg=7196.00, stdev=1408.56, samples=2 00:09:53.889 iops : min= 1550, max= 2048, avg=1799.00, stdev=352.14, samples=2 00:09:53.889 lat (msec) : 10=0.49%, 20=32.78%, 50=44.59%, 100=18.28%, 250=3.87% 00:09:53.889 cpu : usr=1.29%, sys=2.09%, ctx=213, majf=0, minf=1 00:09:53.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:09:53.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.889 issued rwts: total=1536,1927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.889 job1: (groupid=0, jobs=1): err= 0: pid=2340061: Wed Nov 27 07:52:47 2024 00:09:53.889 read: IOPS=5011, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1007msec) 00:09:53.889 slat (nsec): min=1048, max=30130k, avg=81941.55, stdev=754254.24 00:09:53.889 clat (usec): min=2171, max=57460, 
avg=11793.43, stdev=6467.62 00:09:53.889 lat (usec): min=2176, max=57600, avg=11875.37, stdev=6516.63 00:09:53.889 clat percentiles (usec): 00:09:53.889 | 1.00th=[ 3130], 5.00th=[ 3851], 10.00th=[ 6783], 20.00th=[ 8717], 00:09:53.889 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:09:53.889 | 70.00th=[11076], 80.00th=[14615], 90.00th=[18220], 95.00th=[27395], 00:09:53.889 | 99.00th=[42730], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:09:53.889 | 99.99th=[57410] 00:09:53.889 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:09:53.889 slat (nsec): min=1886, max=14710k, avg=92349.75, stdev=618567.41 00:09:53.889 clat (usec): min=271, max=59660, avg=13332.30, stdev=10884.51 00:09:53.889 lat (usec): min=282, max=59665, avg=13424.65, stdev=10963.44 00:09:53.889 clat percentiles (usec): 00:09:53.889 | 1.00th=[ 2180], 5.00th=[ 3458], 10.00th=[ 5145], 20.00th=[ 7570], 00:09:53.889 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10159], 00:09:53.889 | 70.00th=[10421], 80.00th=[16188], 90.00th=[30016], 95.00th=[40109], 00:09:53.889 | 99.00th=[55313], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:09:53.889 | 99.99th=[59507] 00:09:53.889 bw ( KiB/s): min=18832, max=22128, per=31.15%, avg=20480.00, stdev=2330.62, samples=2 00:09:53.889 iops : min= 4708, max= 5532, avg=5120.00, stdev=582.66, samples=2 00:09:53.889 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.20% 00:09:53.889 lat (msec) : 2=0.20%, 4=5.93%, 10=43.88%, 20=37.70%, 50=11.08% 00:09:53.889 lat (msec) : 100=0.94% 00:09:53.889 cpu : usr=3.58%, sys=4.57%, ctx=456, majf=0, minf=2 00:09:53.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:53.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.890 issued rwts: total=5047,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.890 job2: (groupid=0, jobs=1): err= 0: pid=2340062: Wed Nov 27 07:52:47 2024 00:09:53.890 read: IOPS=2798, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1005msec) 00:09:53.890 slat (nsec): min=1654, max=17685k, avg=165521.24, stdev=1107090.09 00:09:53.890 clat (usec): min=3367, max=87178, avg=18782.68, stdev=11396.29 00:09:53.890 lat (usec): min=6110, max=87188, avg=18948.20, stdev=11497.15 00:09:53.890 clat percentiles (usec): 00:09:53.890 | 1.00th=[ 8979], 5.00th=[11076], 10.00th=[11207], 20.00th=[12125], 00:09:53.890 | 30.00th=[14091], 40.00th=[15533], 50.00th=[16319], 60.00th=[17171], 00:09:53.890 | 70.00th=[17957], 80.00th=[20317], 90.00th=[25035], 95.00th=[37487], 00:09:53.890 | 99.00th=[82314], 99.50th=[84411], 99.90th=[87557], 99.95th=[87557], 00:09:53.890 | 99.99th=[87557] 00:09:53.890 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:53.890 slat (usec): min=2, max=13501, avg=167.25, stdev=895.69 00:09:53.890 clat (usec): min=3012, max=90295, avg=24240.19, stdev=16553.33 00:09:53.890 lat (usec): min=3022, max=90329, avg=24407.43, stdev=16645.60 00:09:53.890 clat percentiles (usec): 00:09:53.890 | 1.00th=[ 5866], 5.00th=[ 8979], 10.00th=[10945], 20.00th=[11469], 00:09:53.890 | 30.00th=[12256], 40.00th=[14222], 50.00th=[16450], 60.00th=[25560], 00:09:53.890 | 70.00th=[30802], 80.00th=[36963], 90.00th=[40633], 95.00th=[53740], 00:09:53.890 | 99.00th=[87557], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:09:53.890 | 99.99th=[90702] 00:09:53.890 bw ( 
KiB/s): min=11600, max=12976, per=18.69%, avg=12288.00, stdev=972.98, samples=2 00:09:53.890 iops : min= 2900, max= 3244, avg=3072.00, stdev=243.24, samples=2 00:09:53.890 lat (msec) : 4=0.12%, 10=4.83%, 20=59.26%, 50=31.34%, 100=4.45% 00:09:53.890 cpu : usr=2.99%, sys=3.88%, ctx=262, majf=0, minf=1 00:09:53.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:53.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.890 issued rwts: total=2812,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.890 job3: (groupid=0, jobs=1): err= 0: pid=2340063: Wed Nov 27 07:52:47 2024 00:09:53.890 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:09:53.890 slat (nsec): min=1424, max=5616.2k, avg=79752.44, stdev=471909.21 00:09:53.890 clat (usec): min=5293, max=16954, avg=10033.25, stdev=1691.74 00:09:53.890 lat (usec): min=5758, max=16966, avg=10113.00, stdev=1729.20 00:09:53.890 clat percentiles (usec): 00:09:53.890 | 1.00th=[ 6325], 5.00th=[ 7177], 10.00th=[ 8291], 20.00th=[ 8979], 00:09:53.890 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[10290], 00:09:53.890 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12256], 95.00th=[13173], 00:09:53.890 | 99.00th=[14222], 99.50th=[14877], 99.90th=[16188], 99.95th=[16581], 00:09:53.890 | 99.99th=[16909] 00:09:53.890 write: IOPS=6393, BW=25.0MiB/s (26.2MB/s)(25.1MiB/1006msec); 0 zone resets 00:09:53.890 slat (usec): min=2, max=5337, avg=73.67, stdev=365.21 00:09:53.890 clat (usec): min=5206, max=17228, avg=10181.04, stdev=1598.76 00:09:53.890 lat (usec): min=5341, max=17232, avg=10254.71, stdev=1623.18 00:09:53.890 clat percentiles (usec): 00:09:53.890 | 1.00th=[ 6063], 5.00th=[ 7898], 10.00th=[ 8717], 20.00th=[ 9110], 00:09:53.890 | 30.00th=[ 9372], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[10552], 00:09:53.890 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12387], 00:09:53.890 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16581], 99.95th=[17171], 00:09:53.890 | 99.99th=[17171] 00:09:53.890 bw ( KiB/s): min=24576, max=25864, per=38.36%, avg=25220.00, stdev=910.75, samples=2 00:09:53.890 iops : min= 6144, max= 6466, avg=6305.00, stdev=227.69, samples=2 00:09:53.890 lat (msec) : 10=56.34%, 20=43.66% 00:09:53.890 cpu : usr=4.38%, sys=7.16%, ctx=698, majf=0, minf=1 00:09:53.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:53.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.890 issued rwts: total=6144,6432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.890 00:09:53.890 Run status group 0 (all jobs): 00:09:53.890 READ: bw=60.3MiB/s (63.2MB/s), 6101KiB/s-23.9MiB/s (6248kB/s-25.0MB/s), io=60.7MiB (63.6MB), run=1005-1007msec 00:09:53.890 WRITE: bw=64.2MiB/s (67.3MB/s), 7654KiB/s-25.0MiB/s (7838kB/s-26.2MB/s), io=64.7MiB (67.8MB), run=1005-1007msec 00:09:53.890 00:09:53.890 Disk stats (read/write): 00:09:53.890 nvme0n1: ios=1074/1535, merge=0/0, ticks=14544/35109, in_queue=49653, util=82.26% 00:09:53.890 nvme0n2: ios=4096/4119, merge=0/0, ticks=48832/50120, in_queue=98952, util=83.26% 00:09:53.890 nvme0n3: ios=2588/2599, merge=0/0, ticks=43209/56402, in_queue=99611, util=96.22% 00:09:53.890 
nvme0n4: ios=4911/5120, merge=0/0, ticks=25669/24258, in_queue=49927, util=97.80% 00:09:53.890 07:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:53.890 07:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2340300 00:09:53.890 07:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:53.890 07:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:53.890 [global] 00:09:53.890 thread=1 00:09:53.890 invalidate=1 00:09:53.890 rw=read 00:09:53.890 time_based=1 00:09:53.890 runtime=10 00:09:53.890 ioengine=libaio 00:09:53.890 direct=1 00:09:53.890 bs=4096 00:09:53.890 iodepth=1 00:09:53.890 norandommap=1 00:09:53.890 numjobs=1 00:09:53.890 00:09:53.890 [job0] 00:09:53.890 filename=/dev/nvme0n1 00:09:53.890 [job1] 00:09:53.890 filename=/dev/nvme0n2 00:09:53.890 [job2] 00:09:53.890 filename=/dev/nvme0n3 00:09:53.890 [job3] 00:09:53.890 filename=/dev/nvme0n4 00:09:53.890 Could not set queue depth (nvme0n1) 00:09:53.890 Could not set queue depth (nvme0n2) 00:09:53.890 Could not set queue depth (nvme0n3) 00:09:53.890 Could not set queue depth (nvme0n4) 00:09:54.148 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.148 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.148 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.148 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.148 fio-3.35 00:09:54.148 Starting 4 threads 00:09:56.674 07:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:56.932 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=12320768, buflen=4096 00:09:56.932 fio: pid=2340447, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:56.932 07:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:57.190 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=18460672, buflen=4096 00:09:57.190 fio: pid=2340446, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.191 07:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.191 07:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:57.450 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=53194752, buflen=4096 00:09:57.450 fio: pid=2340444, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.450 07:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.450 07:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:57.450 fio: io_u error on file /dev/nvme0n2: Operation not 
supported: read offset=10919936, buflen=4096 00:09:57.450 fio: pid=2340445, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.709 07:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.709 07:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:57.709 00:09:57.709 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2340444: Wed Nov 27 07:52:51 2024 00:09:57.709 read: IOPS=4124, BW=16.1MiB/s (16.9MB/s)(50.7MiB/3149msec) 00:09:57.709 slat (usec): min=6, max=16792, avg= 8.88, stdev=147.67 00:09:57.709 clat (usec): min=154, max=21335, avg=230.82, stdev=191.79 00:09:57.709 lat (usec): min=161, max=21343, avg=239.70, stdev=242.55 00:09:57.709 clat percentiles (usec): 00:09:57.709 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:09:57.709 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 235], 00:09:57.709 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 269], 00:09:57.709 | 99.00th=[ 289], 99.50th=[ 375], 99.90th=[ 445], 99.95th=[ 523], 00:09:57.709 | 99.99th=[ 3720] 00:09:57.709 bw ( KiB/s): min=15328, max=18496, per=60.62%, avg=16636.67, stdev=1121.46, samples=6 00:09:57.709 iops : min= 3832, max= 4624, avg=4159.17, stdev=280.36, samples=6 00:09:57.710 lat (usec) : 250=78.75%, 500=21.18%, 750=0.02% 00:09:57.710 lat (msec) : 2=0.02%, 4=0.02%, 50=0.01% 00:09:57.710 cpu : usr=0.86%, sys=4.00%, ctx=12990, majf=0, minf=1 00:09:57.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.710 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.710 issued rwts: total=12988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.710 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2340445: Wed Nov 27 07:52:51 2024 00:09:57.710 read: IOPS=789, BW=3158KiB/s (3234kB/s)(10.4MiB/3377msec) 00:09:57.710 slat (usec): min=6, max=11710, avg=16.55, stdev=320.17 00:09:57.710 clat (usec): min=179, max=42153, avg=1240.56, stdev=6267.39 00:09:57.710 lat (usec): min=186, max=52812, avg=1252.73, stdev=6300.89 00:09:57.710 clat percentiles (usec): 00:09:57.710 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:09:57.710 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 258], 00:09:57.710 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 359], 00:09:57.710 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:57.710 | 99.99th=[42206] 00:09:57.710 bw ( KiB/s): min= 96, max=15256, per=12.90%, avg=3541.83, stdev=5956.48, samples=6 00:09:57.710 iops : min= 24, max= 3814, avg=885.33, stdev=1489.21, samples=6 00:09:57.710 lat (usec) : 250=45.74%, 500=51.74%, 750=0.07% 00:09:57.710 lat (msec) : 50=2.40% 00:09:57.710 cpu : usr=0.30%, sys=0.65%, ctx=2669, majf=0, minf=2 00:09:57.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.710 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.710 issued rwts: total=2667,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:09:57.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.710 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2340446: Wed Nov 27 07:52:51 2024 00:09:57.710 read: IOPS=1533, BW=6132KiB/s (6279kB/s)(17.6MiB/2940msec) 00:09:57.710 slat (nsec): min=6633, max=61039, avg=7951.63, stdev=2410.18 00:09:57.710 clat (usec): min=183, max=42087, avg=638.35, stdev=4027.57 00:09:57.710 lat (usec): min=191, max=42111, avg=646.30, stdev=4029.16 00:09:57.710 clat percentiles (usec): 00:09:57.710 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:09:57.710 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:09:57.710 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:09:57.710 | 99.00th=[ 1467], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:57.710 | 99.99th=[42206] 00:09:57.710 bw ( KiB/s): min= 96, max=16256, per=16.26%, avg=4462.40, stdev=7033.83, samples=5 00:09:57.710 iops : min= 24, max= 4064, avg=1115.60, stdev=1758.46, samples=5 00:09:57.710 lat (usec) : 250=72.89%, 500=26.06% 00:09:57.710 lat (msec) : 2=0.02%, 4=0.02%, 50=0.98% 00:09:57.710 cpu : usr=0.44%, sys=1.43%, ctx=4511, majf=0, minf=2 00:09:57.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.710 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.710 issued rwts: total=4508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.710 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2340447: Wed Nov 27 07:52:51 2024 00:09:57.710 read: IOPS=1103, BW=4412KiB/s (4518kB/s)(11.8MiB/2727msec) 00:09:57.710 slat (nsec): min=6476, max=53780, avg=7550.86, stdev=1900.31 00:09:57.710 clat (usec): min=213, max=42992, avg=890.60, stdev=5060.20 00:09:57.710 lat (usec): min=220, max=43010, avg=898.15, stdev=5061.52 00:09:57.710 clat percentiles (usec): 00:09:57.710 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:09:57.710 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:09:57.710 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:09:57.710 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:09:57.710 | 99.99th=[43254] 00:09:57.710 bw ( KiB/s): min= 96, max=15392, per=13.10%, avg=3595.20, stdev=6662.40, samples=5 00:09:57.710 iops : min= 24, max= 3848, avg=898.80, stdev=1665.60, samples=5 00:09:57.710 lat (usec) : 250=47.42%, 500=50.95%, 750=0.03% 00:09:57.710 lat (msec) : 50=1.56% 00:09:57.710 cpu : usr=0.26%, sys=1.03%, ctx=3009, majf=0, minf=2 00:09:57.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.710 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.710 issued rwts: total=3009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.710 00:09:57.710 Run status group 0 (all jobs): 00:09:57.710 READ: bw=26.8MiB/s (28.1MB/s), 3158KiB/s-16.1MiB/s (3234kB/s-16.9MB/s), io=90.5MiB (94.9MB), run=2727-3377msec 00:09:57.710 00:09:57.710 Disk stats (read/write): 00:09:57.710 nvme0n1: ios=12880/0, merge=0/0, ticks=2904/0, in_queue=2904, 
util=95.16% 00:09:57.710 nvme0n2: ios=2666/0, merge=0/0, ticks=3299/0, in_queue=3299, util=95.72% 00:09:57.710 nvme0n3: ios=4282/0, merge=0/0, ticks=3669/0, in_queue=3669, util=100.00% 00:09:57.710 nvme0n4: ios=2614/0, merge=0/0, ticks=2558/0, in_queue=2558, util=96.45% 00:09:57.710 07:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.710 07:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:57.969 07:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.969 07:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:58.228 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.228 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:58.487 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.487 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:58.487 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:58.487 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2340300 00:09:58.487 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:58.487 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.746 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:58.746 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:58.746 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:58.746 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.746 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:58.746 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.746 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:58.746 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:58.746 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:58.746 nvmf hotplug test: fio failed as expected 00:09:58.746 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.005 
07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:59.005 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:59.005 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:59.005 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:59.005 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:59.005 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.005 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:59.005 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.005 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:59.005 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.005 07:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.005 rmmod nvme_tcp 00:09:59.005 rmmod nvme_fabrics 00:09:59.005 rmmod nvme_keyring 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2337502 ']' 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2337502 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2337502 ']' 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2337502 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2337502 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2337502' 00:09:59.005 killing process with pid 2337502 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2337502 00:09:59.005 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2337502 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:59.265 07:52:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.265 07:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.801 00:10:01.801 real 0m25.963s 00:10:01.801 user 1m45.126s 00:10:01.801 sys 0m7.795s 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.801 ************************************ 00:10:01.801 END TEST nvmf_fio_target 00:10:01.801 ************************************ 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.801 ************************************ 00:10:01.801 START TEST nvmf_bdevio 00:10:01.801 ************************************ 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:01.801 * Looking for test storage... 
00:10:01.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:01.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.801 --rc genhtml_branch_coverage=1 00:10:01.801 --rc genhtml_function_coverage=1 00:10:01.801 --rc genhtml_legend=1 00:10:01.801 --rc geninfo_all_blocks=1 00:10:01.801 --rc geninfo_unexecuted_blocks=1 00:10:01.801 00:10:01.801 ' 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:01.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.801 --rc genhtml_branch_coverage=1 00:10:01.801 --rc genhtml_function_coverage=1 00:10:01.801 --rc genhtml_legend=1 00:10:01.801 --rc geninfo_all_blocks=1 00:10:01.801 --rc geninfo_unexecuted_blocks=1 00:10:01.801 00:10:01.801 ' 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:01.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.801 --rc genhtml_branch_coverage=1 00:10:01.801 --rc genhtml_function_coverage=1 00:10:01.801 --rc genhtml_legend=1 00:10:01.801 --rc geninfo_all_blocks=1 00:10:01.801 --rc geninfo_unexecuted_blocks=1 00:10:01.801 00:10:01.801 ' 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:01.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.801 --rc genhtml_branch_coverage=1 00:10:01.801 --rc genhtml_function_coverage=1 00:10:01.801 --rc genhtml_legend=1 00:10:01.801 --rc geninfo_all_blocks=1 00:10:01.801 --rc geninfo_unexecuted_blocks=1 00:10:01.801 00:10:01.801 ' 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.801 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.802 07:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.070 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:07.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:07.071 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.071 07:53:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:07.071 Found net devices under 0000:86:00.0: cvl_0_0 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:07.071 Found net devices under 0000:86:00.1: cvl_0_1 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.071 
07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:10:07.071 00:10:07.071 --- 10.0.0.2 ping statistics --- 00:10:07.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.071 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:10:07.071 00:10:07.071 --- 10.0.0.1 ping statistics --- 00:10:07.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.071 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2344692 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2344692 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2344692 ']' 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.071 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 [2024-11-27 07:53:00.690237] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:10:07.071 [2024-11-27 07:53:00.690286] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.072 [2024-11-27 07:53:00.758435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.072 [2024-11-27 07:53:00.800488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.072 [2024-11-27 07:53:00.800527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.072 [2024-11-27 07:53:00.800534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.072 [2024-11-27 07:53:00.800540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.072 [2024-11-27 07:53:00.800545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.072 [2024-11-27 07:53:00.802207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:07.072 [2024-11-27 07:53:00.802315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:07.072 [2024-11-27 07:53:00.802428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.072 [2024-11-27 07:53:00.802429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.072 [2024-11-27 07:53:00.940274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.072 Malloc0 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.072 07:53:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.072 07:53:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.072 [2024-11-27 07:53:00.997175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:07.072 { 00:10:07.072 "params": { 00:10:07.072 "name": "Nvme$subsystem", 00:10:07.072 "trtype": "$TEST_TRANSPORT", 00:10:07.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.072 "adrfam": "ipv4", 00:10:07.072 "trsvcid": "$NVMF_PORT", 00:10:07.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.072 "hdgst": ${hdgst:-false}, 00:10:07.072 "ddgst": ${ddgst:-false} 00:10:07.072 }, 00:10:07.072 "method": "bdev_nvme_attach_controller" 00:10:07.072 } 00:10:07.072 EOF 00:10:07.072 )") 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:07.072 07:53:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:07.072 "params": { 00:10:07.072 "name": "Nvme1", 00:10:07.072 "trtype": "tcp", 00:10:07.072 "traddr": "10.0.0.2", 00:10:07.072 "adrfam": "ipv4", 00:10:07.072 "trsvcid": "4420", 00:10:07.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.072 "hdgst": false, 00:10:07.072 "ddgst": false 00:10:07.072 }, 00:10:07.072 "method": "bdev_nvme_attach_controller" 00:10:07.072 }' 00:10:07.072 [2024-11-27 07:53:01.050280] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:10:07.072 [2024-11-27 07:53:01.050323] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344793 ] 00:10:07.072 [2024-11-27 07:53:01.114100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.072 [2024-11-27 07:53:01.158198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.072 [2024-11-27 07:53:01.158294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.072 [2024-11-27 07:53:01.158296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.637 I/O targets: 00:10:07.637 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:07.637 00:10:07.637 00:10:07.637 CUnit - A unit testing framework for C - Version 2.1-3 00:10:07.637 http://cunit.sourceforge.net/ 00:10:07.637 00:10:07.637 00:10:07.637 Suite: bdevio tests on: Nvme1n1 00:10:07.637 Test: blockdev write read block ...passed 00:10:07.637 Test: blockdev write zeroes read block ...passed 00:10:07.637 Test: blockdev write zeroes read no split ...passed 00:10:07.637 Test: blockdev write zeroes read split ...passed 00:10:07.637 Test: blockdev write zeroes read split partial ...passed 00:10:07.637 Test: blockdev reset ...[2024-11-27 07:53:01.592703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:07.637 [2024-11-27 07:53:01.592770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1146350 (9): Bad file descriptor 00:10:07.637 [2024-11-27 07:53:01.689424] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:07.637 passed 00:10:07.637 Test: blockdev write read 8 blocks ...passed 00:10:07.637 Test: blockdev write read size > 128k ...passed 00:10:07.637 Test: blockdev write read invalid size ...passed 00:10:07.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:07.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:07.638 Test: blockdev write read max offset ...passed 00:10:07.896 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:07.896 Test: blockdev writev readv 8 blocks ...passed 00:10:07.896 Test: blockdev writev readv 30 x 1block ...passed 00:10:07.896 Test: blockdev writev readv block ...passed 00:10:07.896 Test: blockdev writev readv size > 128k ...passed 00:10:07.896 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:07.896 Test: blockdev comparev and writev ...[2024-11-27 07:53:01.904649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.896 [2024-11-27 07:53:01.904676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:07.896 [2024-11-27 07:53:01.904690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.896 [2024-11-27 07:53:01.904698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:07.896 [2024-11-27 07:53:01.904943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.896 [2024-11-27 07:53:01.904958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:07.896 [2024-11-27 07:53:01.904970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.896 [2024-11-27 07:53:01.904977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:07.896 [2024-11-27 07:53:01.905224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.896 [2024-11-27 07:53:01.905235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:07.896 [2024-11-27 07:53:01.905248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.896 [2024-11-27 07:53:01.905256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:07.896 [2024-11-27 07:53:01.905483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.896 [2024-11-27 07:53:01.905493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:07.896 [2024-11-27 07:53:01.905506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.896 [2024-11-27 07:53:01.905514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:07.896 passed 00:10:07.896 Test: blockdev nvme passthru rw ...passed 00:10:07.896 Test: blockdev nvme passthru vendor specific ...[2024-11-27 07:53:01.988242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.896 [2024-11-27 07:53:01.988259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:07.896 [2024-11-27 07:53:01.988369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.896 [2024-11-27 07:53:01.988379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:07.896 [2024-11-27 07:53:01.988493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.896 [2024-11-27 07:53:01.988502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:07.896 [2024-11-27 07:53:01.988614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.896 [2024-11-27 07:53:01.988628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:07.896 passed 00:10:08.154 Test: blockdev nvme admin passthru ...passed 00:10:08.154 Test: blockdev copy ...passed 00:10:08.154 00:10:08.154 Run Summary: Type Total Ran Passed Failed Inactive 00:10:08.154 suites 1 1 n/a 0 0 00:10:08.154 tests 23 23 23 0 0 00:10:08.154 asserts 152 152 152 0 n/a 00:10:08.154 00:10:08.154 Elapsed time = 1.207 seconds 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.154 rmmod nvme_tcp 00:10:08.154 rmmod nvme_fabrics 00:10:08.154 rmmod nvme_keyring 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2344692 ']' 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2344692 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2344692 ']' 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2344692 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:08.154 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2344692 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2344692' 00:10:08.412 killing process with pid 2344692 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2344692 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2344692 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.412 07:53:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.940 00:10:10.940 real 0m9.179s 00:10:10.940 user 0m10.496s 00:10:10.940 sys 0m4.302s 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.940 ************************************ 00:10:10.940 END TEST nvmf_bdevio 00:10:10.940 ************************************ 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:10.940 00:10:10.940 real 4m26.292s 00:10:10.940 user 10m9.466s 00:10:10.940 sys 1m31.959s 
00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.940 ************************************ 00:10:10.940 END TEST nvmf_target_core 00:10:10.940 ************************************ 00:10:10.940 07:53:04 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:10.940 07:53:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.940 07:53:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.940 07:53:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:10.940 ************************************ 00:10:10.940 START TEST nvmf_target_extra 00:10:10.940 ************************************ 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:10.940 * Looking for test storage... 00:10:10.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:10.940 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:10.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.941 --rc genhtml_branch_coverage=1 00:10:10.941 --rc genhtml_function_coverage=1 00:10:10.941 --rc genhtml_legend=1 00:10:10.941 --rc geninfo_all_blocks=1 00:10:10.941 --rc geninfo_unexecuted_blocks=1 00:10:10.941 00:10:10.941 ' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:10.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.941 --rc genhtml_branch_coverage=1 00:10:10.941 --rc genhtml_function_coverage=1 00:10:10.941 --rc genhtml_legend=1 00:10:10.941 --rc geninfo_all_blocks=1 00:10:10.941 --rc geninfo_unexecuted_blocks=1 00:10:10.941 00:10:10.941 ' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:10.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.941 --rc genhtml_branch_coverage=1 00:10:10.941 --rc genhtml_function_coverage=1 00:10:10.941 --rc genhtml_legend=1 00:10:10.941 --rc geninfo_all_blocks=1 00:10:10.941 --rc geninfo_unexecuted_blocks=1 00:10:10.941 00:10:10.941 ' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:10.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.941 --rc genhtml_branch_coverage=1 00:10:10.941 --rc genhtml_function_coverage=1 00:10:10.941 --rc genhtml_legend=1 00:10:10.941 --rc geninfo_all_blocks=1 00:10:10.941 --rc geninfo_unexecuted_blocks=1 00:10:10.941 00:10:10.941 ' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:10.941 ************************************ 00:10:10.941 START TEST nvmf_example 00:10:10.941 ************************************ 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:10.941 * Looking for test storage... 
00:10:10.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:10.941 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:11.199 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.200 --rc genhtml_branch_coverage=1 00:10:11.200 --rc genhtml_function_coverage=1 00:10:11.200 --rc genhtml_legend=1 00:10:11.200 --rc geninfo_all_blocks=1 00:10:11.200 --rc geninfo_unexecuted_blocks=1 00:10:11.200 00:10:11.200 ' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.200 --rc genhtml_branch_coverage=1 00:10:11.200 --rc genhtml_function_coverage=1 00:10:11.200 --rc genhtml_legend=1 00:10:11.200 --rc geninfo_all_blocks=1 00:10:11.200 --rc geninfo_unexecuted_blocks=1 00:10:11.200 00:10:11.200 ' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.200 --rc genhtml_branch_coverage=1 00:10:11.200 --rc genhtml_function_coverage=1 00:10:11.200 --rc genhtml_legend=1 00:10:11.200 --rc geninfo_all_blocks=1 00:10:11.200 --rc geninfo_unexecuted_blocks=1 00:10:11.200 00:10:11.200 ' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.200 --rc genhtml_branch_coverage=1 00:10:11.200 --rc genhtml_function_coverage=1 00:10:11.200 --rc genhtml_legend=1 00:10:11.200 --rc geninfo_all_blocks=1 00:10:11.200 --rc geninfo_unexecuted_blocks=1 00:10:11.200 00:10:11.200 ' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:11.200 07:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:11.200 07:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.200 07:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:16.464 07:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:16.464 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:16.464 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:16.464 Found net devices under 0000:86:00.0: cvl_0_0 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:16.464 Found net devices under 0000:86:00.1: cvl_0_1 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.464 07:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.464 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.465 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.465 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.465 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:16.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:10:16.723 00:10:16.723 --- 10.0.0.2 ping statistics --- 00:10:16.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.723 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:10:16.723 00:10:16.723 --- 10.0.0.1 ping statistics --- 00:10:16.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.723 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2348648 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2348648 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2348648 ']' 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.723 07:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.723 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.658 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:17.916 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:27.883 Initializing NVMe Controllers 00:10:27.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:27.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:27.883 Initialization complete. Launching workers. 00:10:27.883 ======================================================== 00:10:27.883 Latency(us) 00:10:27.883 Device Information : IOPS MiB/s Average min max 00:10:27.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17907.99 69.95 3573.16 613.92 15433.95 00:10:27.883 ======================================================== 00:10:27.883 Total : 17907.99 69.95 3573.16 613.92 15433.95 00:10:27.883 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.142 rmmod nvme_tcp 00:10:28.142 rmmod nvme_fabrics 00:10:28.142 rmmod nvme_keyring 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2348648 ']' 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2348648 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2348648 ']' 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2348648 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348648 00:10:28.142 07:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2348648' 00:10:28.142 killing process with pid 2348648 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2348648 00:10:28.142 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2348648 00:10:28.401 nvmf threads initialize successfully 00:10:28.401 bdev subsystem init successfully 00:10:28.401 created a nvmf target service 00:10:28.401 create targets's poll groups done 00:10:28.401 all subsystems of target started 00:10:28.401 nvmf target is running 00:10:28.401 all subsystems of target stopped 00:10:28.401 destroy targets's poll groups done 00:10:28.401 destroyed the nvmf target service 00:10:28.401 bdev subsystem finish successfully 00:10:28.401 nvmf threads destroy successfully 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.401 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.304 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.304 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:30.304 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.304 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.562 00:10:30.562 real 0m19.523s 00:10:30.562 user 0m46.072s 00:10:30.562 sys 0m5.860s 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.562 ************************************ 00:10:30.562 END TEST nvmf_example 00:10:30.562 ************************************ 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:30.562 ************************************ 00:10:30.562 START TEST nvmf_filesystem 00:10:30.562 ************************************ 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:30.562 * Looking for test storage... 00:10:30.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.562 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.563 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.826 --rc genhtml_branch_coverage=1 00:10:30.826 --rc genhtml_function_coverage=1 00:10:30.826 --rc genhtml_legend=1 00:10:30.826 --rc geninfo_all_blocks=1 00:10:30.826 --rc geninfo_unexecuted_blocks=1 00:10:30.826 00:10:30.826 ' 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.826 --rc genhtml_branch_coverage=1 00:10:30.826 --rc genhtml_function_coverage=1 00:10:30.826 --rc genhtml_legend=1 00:10:30.826 --rc geninfo_all_blocks=1 00:10:30.826 --rc geninfo_unexecuted_blocks=1 00:10:30.826 00:10:30.826 ' 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.826 --rc genhtml_branch_coverage=1 00:10:30.826 --rc genhtml_function_coverage=1 00:10:30.826 --rc genhtml_legend=1 00:10:30.826 --rc geninfo_all_blocks=1 00:10:30.826 --rc geninfo_unexecuted_blocks=1 00:10:30.826 00:10:30.826 ' 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.826 --rc genhtml_branch_coverage=1 00:10:30.826 --rc genhtml_function_coverage=1 00:10:30.826 --rc genhtml_legend=1 00:10:30.826 --rc geninfo_all_blocks=1 00:10:30.826 --rc geninfo_unexecuted_blocks=1 00:10:30.826 00:10:30.826 ' 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:30.826 07:53:24 
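The lcov probe traced above (scripts/common.sh, cmp_versions via "lt 1.15 2") decides whether the installed lcov predates 2.x by splitting both version strings on '.', '-' and ':' and comparing the fields numerically from left to right. A minimal sketch of that comparison idiom follows; version_lt is an illustrative name rather than the SPDK helper, and it assumes purely numeric fields (the real helper also validates each field with a regex):

  #!/usr/bin/env bash
  # Sketch of the field-wise version comparison shown in the trace above.
  # Assumes numeric fields only; the real helper also regex-checks each field.
  version_lt() {
      local IFS=.-:                      # split on '.', '-' and ':' like scripts/common.sh does
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i x y max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
          x=${a[i]:-0}; y=${b[i]:-0}     # missing fields count as 0
          (( x > y )) && return 1        # left side is newer: not less-than
          (( x < y )) && return 0        # left side is older: less-than
      done
      return 1                           # equal versions: not less-than
  }

  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the branch taken in this run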
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:30.826 
07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:30.826 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
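The applications.sh lines traced above resolve the repository root from the script's own location and collect the test binaries into arrays so the harness can launch them uniformly. A rough sketch of that bootstrapping, with the layout inferred from the paths printed in this log (the "../.." hop up from test/common is an inference, not a quote of the SPDK code):

  #!/usr/bin/env bash
  # Sketch of the path bootstrapping seen in test/common/applications.sh above.
  # Going two levels up from test/common to reach the repo root is inferred from
  # the _root values printed in the log.
  _root=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")/../..")
  _app_dir=$_root/build/bin
  _examples_dir=$_root/build/examples

  NVMF_APP=("$_app_dir/nvmf_tgt")        # arrays so callers can append extra arguments
  SPDK_APP=("$_app_dir/spdk_tgt")

  echo "nvmf target binary: ${NVMF_APP[0]}"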
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:30.827 #define SPDK_CONFIG_H 00:10:30.827 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:30.827 #define SPDK_CONFIG_APPS 1 00:10:30.827 #define SPDK_CONFIG_ARCH native 00:10:30.827 #undef SPDK_CONFIG_ASAN 00:10:30.827 #undef SPDK_CONFIG_AVAHI 00:10:30.827 #undef SPDK_CONFIG_CET 00:10:30.827 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:30.827 #define SPDK_CONFIG_COVERAGE 1 00:10:30.827 #define SPDK_CONFIG_CROSS_PREFIX 00:10:30.827 #undef SPDK_CONFIG_CRYPTO 00:10:30.827 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:30.827 #undef SPDK_CONFIG_CUSTOMOCF 00:10:30.827 #undef SPDK_CONFIG_DAOS 00:10:30.827 #define SPDK_CONFIG_DAOS_DIR 00:10:30.827 #define SPDK_CONFIG_DEBUG 1 00:10:30.827 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:30.827 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:30.827 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:30.827 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:30.827 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:30.827 #undef SPDK_CONFIG_DPDK_UADK 00:10:30.827 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:30.827 #define SPDK_CONFIG_EXAMPLES 1 00:10:30.827 #undef SPDK_CONFIG_FC 00:10:30.827 #define SPDK_CONFIG_FC_PATH 00:10:30.827 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:30.827 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:30.827 #define SPDK_CONFIG_FSDEV 1 00:10:30.827 #undef SPDK_CONFIG_FUSE 00:10:30.827 #undef SPDK_CONFIG_FUZZER 00:10:30.827 #define SPDK_CONFIG_FUZZER_LIB 00:10:30.827 #undef SPDK_CONFIG_GOLANG 00:10:30.827 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:30.827 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:30.827 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:30.827 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:30.827 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:30.827 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:30.827 #undef SPDK_CONFIG_HAVE_LZ4 00:10:30.827 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:30.827 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:30.827 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:30.827 #define SPDK_CONFIG_IDXD 1 00:10:30.827 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:30.827 #undef SPDK_CONFIG_IPSEC_MB 00:10:30.827 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:30.827 #define SPDK_CONFIG_ISAL 1 00:10:30.827 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:30.827 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:30.827 #define SPDK_CONFIG_LIBDIR 00:10:30.827 #undef SPDK_CONFIG_LTO 00:10:30.827 #define SPDK_CONFIG_MAX_LCORES 128 00:10:30.827 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:30.827 #define SPDK_CONFIG_NVME_CUSE 1 00:10:30.827 #undef SPDK_CONFIG_OCF 00:10:30.827 #define SPDK_CONFIG_OCF_PATH 00:10:30.827 #define SPDK_CONFIG_OPENSSL_PATH 00:10:30.827 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:30.827 #define SPDK_CONFIG_PGO_DIR 00:10:30.827 #undef SPDK_CONFIG_PGO_USE 00:10:30.827 #define SPDK_CONFIG_PREFIX /usr/local 00:10:30.827 #undef SPDK_CONFIG_RAID5F 00:10:30.827 #undef SPDK_CONFIG_RBD 00:10:30.827 #define SPDK_CONFIG_RDMA 1 00:10:30.827 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:30.827 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:30.827 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:30.827 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:30.827 #define SPDK_CONFIG_SHARED 1 00:10:30.827 #undef SPDK_CONFIG_SMA 00:10:30.827 #define SPDK_CONFIG_TESTS 1 00:10:30.827 #undef SPDK_CONFIG_TSAN 
00:10:30.827 #define SPDK_CONFIG_UBLK 1 00:10:30.827 #define SPDK_CONFIG_UBSAN 1 00:10:30.827 #undef SPDK_CONFIG_UNIT_TESTS 00:10:30.827 #undef SPDK_CONFIG_URING 00:10:30.827 #define SPDK_CONFIG_URING_PATH 00:10:30.827 #undef SPDK_CONFIG_URING_ZNS 00:10:30.827 #undef SPDK_CONFIG_USDT 00:10:30.827 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:30.827 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:30.827 #define SPDK_CONFIG_VFIO_USER 1 00:10:30.827 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:30.827 #define SPDK_CONFIG_VHOST 1 00:10:30.827 #define SPDK_CONFIG_VIRTIO 1 00:10:30.827 #undef SPDK_CONFIG_VTUNE 00:10:30.827 #define SPDK_CONFIG_VTUNE_DIR 00:10:30.827 #define SPDK_CONFIG_WERROR 1 00:10:30.827 #define SPDK_CONFIG_WPDK_DIR 00:10:30.827 #undef SPDK_CONFIG_XNVME 00:10:30.827 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
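Just before the PATH setup, applications.sh decides whether this is a debug build by reading include/spdk/config.h in one go and glob-matching its contents against the SPDK_CONFIG_DEBUG define (that is what the long backslash-escaped pattern in the trace is). A sketch of that probe; the path is the one printed in this log:

  #!/usr/bin/env bash
  # Sketch of the debug-build probe traced above: read config.h and glob-match it
  # against the define. Path copied from this log; adjust for another checkout.
  config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h

  if [[ -e "$config_h" ]] && [[ "$(<"$config_h")" == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build: SPDK_AUTOTEST_DEBUG_APPS may take effect"
  else
      echo "release build, or config.h not found"
  fi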
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:30.827 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:30.828 07:53:24 
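The very long PATH echoed above comes from /etc/opt/spdk-pkgdep/paths/export.sh, which unconditionally prepends each tool directory and gets sourced once per nested test run, so the same entries pile up. The prepend idiom itself is simply the following (directories copied from this log; no deduplication is attempted, which is why the echoed PATH repeats them many times):

  #!/usr/bin/env bash
  # Sketch of the PATH handling in paths/export.sh as traced above.
  PATH=/opt/golangci/1.54.2/bin:$PATH
  PATH=/opt/go/1.21.1/bin:$PATH
  PATH=/opt/protoc/21.7/bin:$PATH
  export PATH
  echo "$PATH"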
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:30.828 07:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:30.828 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
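The long run of ": <value>" / "export SPDK_TEST_*" pairs above is autotest_common.sh giving every test flag a default and exporting it: values already set by autorun-spdk.conf (NVMF, NVME_CLI, VFIOUSER, the tcp transport, the e810 NIC selection) survive, everything else falls back to 0 or an empty string. The idiom, shown with a few flags visible in this run (the fallback values here are illustrative, not necessarily the upstream defaults):

  #!/usr/bin/env bash
  # Sketch of the flag-defaulting idiom traced above: keep a value the caller's
  # config already set, otherwise fall back, then export for child scripts.
  : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF              # 1 in this run
  : "${SPDK_TEST_NVME_CLI:=0}";         export SPDK_TEST_NVME_CLI          # 1 in this run
  : "${SPDK_TEST_VFIOUSER:=0}";         export SPDK_TEST_VFIOUSER          # 1 in this run
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT    # tcp in this run; fallback shown is illustrative
  : "${SPDK_TEST_NVMF_NICS:=}";         export SPDK_TEST_NVMF_NICS         # e810 in this run

  echo "transport=$SPDK_TEST_NVMF_TRANSPORT nics=$SPDK_TEST_NVMF_NICS"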
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:30.829 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
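The sanitizer block above exports the ASAN and UBSAN option strings and writes a fresh LeakSanitizer suppression file for a known libfuse leak before pointing LSAN_OPTIONS at it. A condensed sketch, with the option strings and the suppression path copied from this log:

  #!/usr/bin/env bash
  # Sketch of the sanitizer environment traced above; values copied from the log.
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

  supp=/var/tmp/asan_suppression_file      # path shown in the trace
  rm -rf "$supp"
  echo "leak:libfuse3.so" > "$supp"        # suppress the known libfuse3 leak report
  export LSAN_OPTIONS=suppressions=$supp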
00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2350949 ]] 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2350949 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.AK3XfW 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.AK3XfW/tests/target /tmp/spdk.AK3XfW 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:30.830 07:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189102645248 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6861316096 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97980649472 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1331200 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.830 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:30.830 07:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:30.831 * Looking for test storage... 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189102645248 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9075908608 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:30.831 07:53:24 
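
The storage search traced above boils down to: build a list of candidate directories, take the first one whose filesystem has enough free space, and export it as SPDK_TEST_STORAGE. A condensed sketch of that selection (the real set_test_storage in autotest_common.sh also special-cases tmpfs/ramfs mounts and checks that usage stays under 95%; $testdir is the caller's test directory, here test/nvmf/target):

    requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB plus margin, the 2214592512 seen in the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

    for target_dir in "${storage_candidates[@]}"; do
        mkdir -p "$target_dir"
        # available bytes on the filesystem backing this candidate
        target_space=$(df -B1 --output=avail "$target_dir" | tail -1)
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done

In this run the overlay root had ~189 GB free, so the test directory itself was accepted on the first pass.
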
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.831 --rc genhtml_branch_coverage=1 00:10:30.831 --rc genhtml_function_coverage=1 00:10:30.831 --rc genhtml_legend=1 00:10:30.831 --rc geninfo_all_blocks=1 00:10:30.831 --rc geninfo_unexecuted_blocks=1 00:10:30.831 00:10:30.831 ' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.831 --rc genhtml_branch_coverage=1 00:10:30.831 --rc genhtml_function_coverage=1 00:10:30.831 --rc genhtml_legend=1 00:10:30.831 --rc geninfo_all_blocks=1 00:10:30.831 --rc geninfo_unexecuted_blocks=1 00:10:30.831 00:10:30.831 ' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.831 --rc genhtml_branch_coverage=1 00:10:30.831 --rc genhtml_function_coverage=1 00:10:30.831 --rc genhtml_legend=1 00:10:30.831 --rc geninfo_all_blocks=1 00:10:30.831 --rc geninfo_unexecuted_blocks=1 00:10:30.831 00:10:30.831 ' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.831 --rc genhtml_branch_coverage=1 00:10:30.831 --rc genhtml_function_coverage=1 00:10:30.831 --rc genhtml_legend=1 00:10:30.831 --rc geninfo_all_blocks=1 00:10:30.831 --rc geninfo_unexecuted_blocks=1 00:10:30.831 00:10:30.831 ' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
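
The version gate traced above decides whether the older lcov 1.x coverage flags are needed. A condensed equivalent of the cmp_versions logic from scripts/common.sh, which splits both versions on '.', '-' and ':' and compares field by field (numeric components assumed, missing fields treated as 0):

    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal is not "less than"
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')
    if version_lt "$lcov_ver" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

Here the installed lcov reported 1.15, so the 1.x-style --rc options were folded into LCOV_OPTS as shown in the trace.
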
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.831 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.832 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.092 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.358 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.358 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.358 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:36.359 
07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:36.359 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:36.359 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:36.359 Found net devices under 0000:86:00.0: cvl_0_0 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:36.359 Found net devices under 
0000:86:00.1: cvl_0_1 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.359 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:10:36.618 00:10:36.618 --- 10.0.0.2 ping statistics --- 00:10:36.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.618 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:10:36.618 00:10:36.618 --- 10.0.0.1 ping statistics --- 00:10:36.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.618 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.618 ************************************ 00:10:36.618 START TEST nvmf_filesystem_no_in_capsule 00:10:36.618 ************************************ 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
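
The network plumbing traced above takes the two back-to-back E810 ports discovered under 0000:86:00.0/.1 (exposed as cvl_0_0 and cvl_0_1 via /sys/bus/pci/devices/<addr>/net) and splits them across namespaces: the target port moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port stays in the root namespace with 10.0.0.1/24, an iptables rule opens TCP/4420, and one ping in each direction proves reachability. A condensed replay of those commands, with names taken from this run:

    target_if=cvl_0_0       # E810 port used by the NVMe-oF target
    initiator_if=cvl_0_1    # peer port left in the root namespace for the host
    ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # open the NVMe/TCP port on the initiator side, then verify both directions
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1

The round-trip times in the ping output (~0.2 ms) confirm the two ports are cabled back to back on the same host.
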
00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2354190 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2354190 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2354190 ']' 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.618 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.618 [2024-11-27 07:53:30.690088] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:10:36.618 [2024-11-27 07:53:30.690139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.876 [2024-11-27 07:53:30.760843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.876 [2024-11-27 07:53:30.804941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.876 [2024-11-27 07:53:30.804983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.876 [2024-11-27 07:53:30.804992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.876 [2024-11-27 07:53:30.804999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.876 [2024-11-27 07:53:30.805020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
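
nvmfappstart, traced above, launches nvmf_tgt inside the target namespace with the flags seen in the log (-i 0 -e 0xFFFF -m 0xF), records its PID, and waits for the RPC socket to come up. A simplified sketch of that sequence; the polling loop stands in for waitforlisten() and rpc_get_methods is just a cheap RPC to probe readiness:

    rpc=./scripts/rpc.py                 # relative to the spdk checkout
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for _ in $(seq 1 100); do
        if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break                        # target is up and serving RPCs
        fi
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done

With -m 0xF the app claims cores 0-3, which is why four reactors are reported in the startup notices that follow.
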
00:10:36.876 [2024-11-27 07:53:30.806620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.876 [2024-11-27 07:53:30.806717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.876 [2024-11-27 07:53:30.806736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.876 [2024-11-27 07:53:30.806742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.876 [2024-11-27 07:53:30.948912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.876 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.134 Malloc1 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.135 07:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.135 [2024-11-27 07:53:31.109726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:37.135 { 00:10:37.135 "name": "Malloc1", 00:10:37.135 "aliases": [ 00:10:37.135 "9120ad38-cf7f-4931-abee-550c1da51171" 00:10:37.135 ], 00:10:37.135 "product_name": "Malloc disk", 00:10:37.135 "block_size": 512, 00:10:37.135 "num_blocks": 1048576, 00:10:37.135 "uuid": "9120ad38-cf7f-4931-abee-550c1da51171", 00:10:37.135 "assigned_rate_limits": { 00:10:37.135 "rw_ios_per_sec": 0, 00:10:37.135 "rw_mbytes_per_sec": 0, 00:10:37.135 "r_mbytes_per_sec": 0, 00:10:37.135 "w_mbytes_per_sec": 0 00:10:37.135 }, 00:10:37.135 "claimed": true, 00:10:37.135 "claim_type": "exclusive_write", 00:10:37.135 "zoned": false, 00:10:37.135 "supported_io_types": { 00:10:37.135 "read": 
true, 00:10:37.135 "write": true, 00:10:37.135 "unmap": true, 00:10:37.135 "flush": true, 00:10:37.135 "reset": true, 00:10:37.135 "nvme_admin": false, 00:10:37.135 "nvme_io": false, 00:10:37.135 "nvme_io_md": false, 00:10:37.135 "write_zeroes": true, 00:10:37.135 "zcopy": true, 00:10:37.135 "get_zone_info": false, 00:10:37.135 "zone_management": false, 00:10:37.135 "zone_append": false, 00:10:37.135 "compare": false, 00:10:37.135 "compare_and_write": false, 00:10:37.135 "abort": true, 00:10:37.135 "seek_hole": false, 00:10:37.135 "seek_data": false, 00:10:37.135 "copy": true, 00:10:37.135 "nvme_iov_md": false 00:10:37.135 }, 00:10:37.135 "memory_domains": [ 00:10:37.135 { 00:10:37.135 "dma_device_id": "system", 00:10:37.135 "dma_device_type": 1 00:10:37.135 }, 00:10:37.135 { 00:10:37.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.135 "dma_device_type": 2 00:10:37.135 } 00:10:37.135 ], 00:10:37.135 "driver_specific": {} 00:10:37.135 } 00:10:37.135 ]' 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:37.135 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.508 07:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.508 07:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:38.508 07:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.508 07:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:38.508 07:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk 
-l -o NAME,SERIAL 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:40.407 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:40.664 07:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:41.230 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.166 ************************************ 00:10:42.166 START TEST filesystem_ext4 00:10:42.166 ************************************ 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:42.166 
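
The provisioning traced above can be condensed as follows: a 512 MiB malloc bdev is exported over NVMe/TCP from the namespaced target, the root namespace connects to it, finds the resulting block device by its SPDK serial, and carves a single GPT partition for the filesystem passes. The RPC socket is a UNIX path, so rpc.py does not need ip netns exec. A sketch (device-lookup via awk instead of the grep -oP used by the script):

    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    serial=SPDKISFASTANDAWESOME

    "$rpc" nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: no in-capsule data for this variant
    "$rpc" bdev_malloc_create 512 512 -b Malloc1           # 512 MiB bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem "$nqn" -a -s "$serial"
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562

    # the namespace appears as the block device whose SERIAL matches the subsystem
    nvme_name=$(lsblk -l -o NAME,SERIAL | awk -v s="$serial" '$2 == s {print $1}')
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

In this run the device came up as nvme0n1, and the 536870912-byte size reported by sysfs matched the malloc bdev exactly, as checked in the trace.
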
07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:42.166 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:42.166 mke2fs 1.47.0 (5-Feb-2023) 00:10:42.166 Discarding device blocks: 0/522240 done 00:10:42.425 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:42.425 Filesystem UUID: bd5f375f-e86e-4dce-85a7-40428b39678a 00:10:42.425 Superblock backups stored on blocks: 00:10:42.425 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:42.425 00:10:42.425 Allocating group tables: 0/64 done 00:10:42.425 Writing inode tables: 0/64 done 00:10:44.956 Creating journal (8192 blocks): done 00:10:44.956 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:10:44.956 00:10:44.956 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:44.956 07:53:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:51.519 07:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2354190 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:51.519 00:10:51.519 real 0m9.010s 00:10:51.519 user 0m0.031s 00:10:51.519 sys 0m0.071s 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:51.519 ************************************ 00:10:51.519 END TEST filesystem_ext4 00:10:51.519 ************************************ 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.519 ************************************ 00:10:51.519 START TEST filesystem_btrfs 00:10:51.519 ************************************ 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:51.519 07:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:51.519 btrfs-progs v6.8.1 00:10:51.519 See https://btrfs.readthedocs.io for more information. 00:10:51.519 00:10:51.519 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:51.519 NOTE: several default settings have changed in version 5.15, please make sure 00:10:51.519 this does not affect your deployments: 00:10:51.519 - DUP for metadata (-m dup) 00:10:51.519 - enabled no-holes (-O no-holes) 00:10:51.519 - enabled free-space-tree (-R free-space-tree) 00:10:51.519 00:10:51.519 Label: (null) 00:10:51.519 UUID: 250f78ca-5962-40cd-9d58-d94074bdc94e 00:10:51.519 Node size: 16384 00:10:51.519 Sector size: 4096 (CPU page size: 4096) 00:10:51.519 Filesystem size: 510.00MiB 00:10:51.519 Block group profiles: 00:10:51.519 Data: single 8.00MiB 00:10:51.519 Metadata: DUP 32.00MiB 00:10:51.519 System: DUP 8.00MiB 00:10:51.519 SSD detected: yes 00:10:51.519 Zoned device: no 00:10:51.519 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:51.519 Checksum: crc32c 00:10:51.519 Number of devices: 1 00:10:51.519 Devices: 00:10:51.519 ID SIZE PATH 00:10:51.519 1 510.00MiB /dev/nvme0n1p1 00:10:51.519 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:51.519 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2354190 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.456 
07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.456 00:10:52.456 real 0m1.122s 00:10:52.456 user 0m0.023s 00:10:52.456 sys 0m0.118s 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:52.456 ************************************ 00:10:52.456 END TEST filesystem_btrfs 00:10:52.456 ************************************ 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.456 ************************************ 00:10:52.456 START TEST filesystem_xfs 00:10:52.456 ************************************ 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:52.456 07:53:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:52.456 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:52.456 = sectsz=512 attr=2, projid32bit=1 00:10:52.456 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:52.456 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:52.456 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:52.456 = sunit=0 swidth=0 blks 00:10:52.456 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:52.456 log =internal log bsize=4096 blocks=16384, version=2 00:10:52.456 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:52.456 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:53.388 Discarding blocks...Done. 00:10:53.388 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:53.388 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2354190 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:55.290 00:10:55.290 real 0m2.779s 00:10:55.290 user 0m0.030s 00:10:55.290 sys 0m0.069s 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:55.290 ************************************ 00:10:55.290 END TEST filesystem_xfs 00:10:55.290 ************************************ 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.290 07:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:55.290 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2354190 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2354190 ']' 00:10:55.291 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2354190 00:10:55.549 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:55.549 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.549 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2354190 00:10:55.549 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.549 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.549 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2354190' 00:10:55.549 killing process with pid 2354190 00:10:55.550 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2354190 00:10:55.550 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2354190 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:55.808 00:10:55.808 real 0m19.146s 00:10:55.808 user 1m15.371s 00:10:55.808 sys 0m1.455s 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.808 ************************************ 00:10:55.808 END TEST nvmf_filesystem_no_in_capsule 00:10:55.808 ************************************ 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.808 ************************************ 00:10:55.808 START TEST nvmf_filesystem_in_capsule 00:10:55.808 ************************************ 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2357479 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2357479 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2357479 ']' 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
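(For orientation: the in-capsule variant that starts here differs from the previous run only in the transport options handed to the target over the RPC socket. A minimal sketch of the bring-up the xtrace below performs, with the NQN, serial, listen address and sizes taken from the log itself; invoking scripts/rpc.py directly is an assumption, since the test wraps these calls in its rpc_cmd helper.)
# Sketch only -- the test drives these through rpc_cmd against /var/tmp/spdk.sock.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # 4096-byte in-capsule data threshold
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB malloc bdev with 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420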
00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.808 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.808 [2024-11-27 07:53:49.901239] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:10:55.808 [2024-11-27 07:53:49.901279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.071 [2024-11-27 07:53:49.968120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.071 [2024-11-27 07:53:50.010243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.071 [2024-11-27 07:53:50.010279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.071 [2024-11-27 07:53:50.010288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.071 [2024-11-27 07:53:50.010295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.071 [2024-11-27 07:53:50.010301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.071 [2024-11-27 07:53:50.011796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.071 [2024-11-27 07:53:50.011812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.071 [2024-11-27 07:53:50.011830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.071 [2024-11-27 07:53:50.011832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.071 [2024-11-27 07:53:50.159551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.071 07:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.071 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.358 Malloc1 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.358 [2024-11-27 07:53:50.320114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:56.358 07:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:56.358 { 00:10:56.358 "name": "Malloc1", 00:10:56.358 "aliases": [ 00:10:56.358 "25881003-5543-43e0-9049-736448765dd7" 00:10:56.358 ], 00:10:56.358 "product_name": "Malloc disk", 00:10:56.358 "block_size": 512, 00:10:56.358 "num_blocks": 1048576, 00:10:56.358 "uuid": "25881003-5543-43e0-9049-736448765dd7", 00:10:56.358 "assigned_rate_limits": { 00:10:56.358 "rw_ios_per_sec": 0, 00:10:56.358 "rw_mbytes_per_sec": 0, 00:10:56.358 "r_mbytes_per_sec": 0, 00:10:56.358 "w_mbytes_per_sec": 0 00:10:56.358 }, 00:10:56.358 "claimed": true, 00:10:56.358 "claim_type": "exclusive_write", 00:10:56.358 "zoned": false, 00:10:56.358 "supported_io_types": { 00:10:56.358 "read": true, 00:10:56.358 "write": true, 00:10:56.358 "unmap": true, 00:10:56.358 "flush": true, 00:10:56.358 "reset": true, 00:10:56.358 "nvme_admin": false, 00:10:56.358 "nvme_io": false, 00:10:56.358 "nvme_io_md": false, 00:10:56.358 "write_zeroes": true, 00:10:56.358 "zcopy": true, 00:10:56.358 "get_zone_info": false, 00:10:56.358 "zone_management": false, 00:10:56.358 "zone_append": false, 00:10:56.358 "compare": false, 00:10:56.358 "compare_and_write": false, 00:10:56.358 "abort": true, 00:10:56.358 "seek_hole": false, 00:10:56.358 "seek_data": false, 00:10:56.358 "copy": true, 00:10:56.358 "nvme_iov_md": false 00:10:56.358 }, 00:10:56.358 "memory_domains": [ 00:10:56.358 { 00:10:56.358 "dma_device_id": "system", 00:10:56.358 "dma_device_type": 1 00:10:56.358 }, 00:10:56.358 { 00:10:56.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.358 "dma_device_type": 2 00:10:56.358 } 00:10:56.358 ], 00:10:56.358 "driver_specific": {} 00:10:56.358 } 00:10:56.358 ]' 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:56.358 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:56.359 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:56.359 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:56.359 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:56.359 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:56.359 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:56.359 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.811 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:57.811 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:57.811 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.811 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:57.811 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:59.707 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:59.962 07:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:00.222 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.594 ************************************ 00:11:01.594 START TEST filesystem_in_capsule_ext4 00:11:01.594 ************************************ 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:01.594 07:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:01.594 mke2fs 1.47.0 (5-Feb-2023) 00:11:01.594 Discarding device blocks: 0/522240 done 00:11:01.594 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:01.594 Filesystem UUID: 59fda7c5-44bb-4e4b-ade4-9399251695fc 00:11:01.594 Superblock backups stored on blocks: 00:11:01.594 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:01.594 00:11:01.594 Allocating group tables: 0/64 done 00:11:01.594 Writing inode tables: 
0/64 done 00:11:04.371 Creating journal (8192 blocks): done 00:11:05.754 Writing superblocks and filesystem accounting information: 0/64 done 00:11:05.754 00:11:05.754 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:05.754 07:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.313 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.313 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:12.313 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.313 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:12.313 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2357479 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.314 00:11:12.314 real 0m10.167s 00:11:12.314 user 0m0.017s 00:11:12.314 sys 0m0.085s 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:12.314 ************************************ 00:11:12.314 END TEST filesystem_in_capsule_ext4 00:11:12.314 ************************************ 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.314 
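(Each filesystem_in_capsule_* sub-test above and below runs the same short write/verify loop from target/filesystem.sh. Reconstructed here from the xtrace for readability; the retry handling hinted at by "i=0" never fires in this log and is left out.)
# Approximate per-filesystem check (sketch, not the verbatim script).
mount /dev/nvme0n1p1 /mnt/device            # mount the freshly formatted partition
touch /mnt/device/aaa && sync               # prove it is writable
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 "$nvmfpid"                          # the nvmf_tgt process must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still exposed to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition table survived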
************************************ 00:11:12.314 START TEST filesystem_in_capsule_btrfs 00:11:12.314 ************************************ 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:12.314 07:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:12.314 btrfs-progs v6.8.1 00:11:12.314 See https://btrfs.readthedocs.io for more information. 00:11:12.314 00:11:12.314 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:12.314 NOTE: several default settings have changed in version 5.15, please make sure 00:11:12.314 this does not affect your deployments: 00:11:12.314 - DUP for metadata (-m dup) 00:11:12.314 - enabled no-holes (-O no-holes) 00:11:12.314 - enabled free-space-tree (-R free-space-tree) 00:11:12.314 00:11:12.314 Label: (null) 00:11:12.314 UUID: 94915d90-1503-4884-bf44-6414f5ae2cd9 00:11:12.314 Node size: 16384 00:11:12.314 Sector size: 4096 (CPU page size: 4096) 00:11:12.314 Filesystem size: 510.00MiB 00:11:12.314 Block group profiles: 00:11:12.314 Data: single 8.00MiB 00:11:12.314 Metadata: DUP 32.00MiB 00:11:12.314 System: DUP 8.00MiB 00:11:12.314 SSD detected: yes 00:11:12.314 Zoned device: no 00:11:12.314 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:12.314 Checksum: crc32c 00:11:12.314 Number of devices: 1 00:11:12.314 Devices: 00:11:12.314 ID SIZE PATH 00:11:12.314 1 510.00MiB /dev/nvme0n1p1 00:11:12.314 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2357479 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.314 00:11:12.314 real 0m0.752s 00:11:12.314 user 0m0.026s 00:11:12.314 sys 0m0.114s 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:12.314 ************************************ 00:11:12.314 END TEST filesystem_in_capsule_btrfs 00:11:12.314 ************************************ 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.314 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.572 ************************************ 00:11:12.572 START TEST filesystem_in_capsule_xfs 00:11:12.572 ************************************ 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:12.572 07:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:12.572 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:12.572 = sectsz=512 attr=2, projid32bit=1 00:11:12.572 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:12.572 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:12.572 data = bsize=4096 blocks=130560, imaxpct=25 00:11:12.572 = sunit=0 swidth=0 blks 00:11:12.572 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:12.573 log =internal log bsize=4096 blocks=16384, version=2 00:11:12.573 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:12.573 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:13.504 Discarding blocks...Done. 
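(The make_filesystem helper that just produced this mkfs.xfs output only picks the right force flag and shells out; a condensed sketch reconstructed from the xtrace above. The retry loop implied by "local i=0" is not exercised anywhere in this log and is omitted.)
# Condensed from common/autotest_common.sh as it appears in the trace (sketch).
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F                    # mkfs.ext4 forces with -F
    else
        force=-f                    # mkfs.btrfs and mkfs.xfs force with -f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}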
00:11:13.504 07:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:13.504 07:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.401 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.401 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2357479 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.658 00:11:15.658 real 0m3.129s 00:11:15.658 user 0m0.025s 00:11:15.658 sys 0m0.074s 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:15.658 ************************************ 00:11:15.658 END TEST filesystem_in_capsule_xfs 00:11:15.658 ************************************ 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:15.658 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2357479 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2357479 ']' 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2357479 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2357479 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2357479' 00:11:15.917 killing process with pid 2357479 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2357479 00:11:15.917 07:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2357479 00:11:16.175 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:16.175 00:11:16.175 real 0m20.359s 00:11:16.175 user 1m20.193s 00:11:16.175 sys 0m1.524s 00:11:16.175 07:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.175 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.175 ************************************ 00:11:16.175 END TEST nvmf_filesystem_in_capsule 00:11:16.175 ************************************ 00:11:16.175 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:16.175 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.175 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:16.176 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.176 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:16.176 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.176 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.176 rmmod nvme_tcp 00:11:16.176 rmmod nvme_fabrics 00:11:16.176 rmmod nvme_keyring 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.434 07:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.336 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.336 00:11:18.336 real 0m47.871s 00:11:18.336 user 2m37.531s 00:11:18.336 sys 0m7.413s 00:11:18.336 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.336 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.336 
************************************ 00:11:18.336 END TEST nvmf_filesystem 00:11:18.336 ************************************ 00:11:18.336 07:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:18.336 07:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.336 07:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.336 07:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.336 ************************************ 00:11:18.336 START TEST nvmf_target_discovery 00:11:18.336 ************************************ 00:11:18.336 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:18.595 * Looking for test storage... 00:11:18.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:18.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.595 --rc genhtml_branch_coverage=1 00:11:18.595 --rc genhtml_function_coverage=1 00:11:18.595 --rc genhtml_legend=1 00:11:18.595 --rc geninfo_all_blocks=1 00:11:18.595 --rc geninfo_unexecuted_blocks=1 00:11:18.595 00:11:18.595 ' 00:11:18.595 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:18.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.595 --rc genhtml_branch_coverage=1 00:11:18.595 --rc genhtml_function_coverage=1 00:11:18.595 --rc genhtml_legend=1 00:11:18.596 --rc geninfo_all_blocks=1 00:11:18.596 --rc geninfo_unexecuted_blocks=1 00:11:18.596 00:11:18.596 ' 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:18.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.596 --rc genhtml_branch_coverage=1 00:11:18.596 --rc genhtml_function_coverage=1 00:11:18.596 --rc genhtml_legend=1 00:11:18.596 --rc geninfo_all_blocks=1 00:11:18.596 --rc geninfo_unexecuted_blocks=1 00:11:18.596 00:11:18.596 ' 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:18.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.596 --rc genhtml_branch_coverage=1 00:11:18.596 --rc genhtml_function_coverage=1 00:11:18.596 --rc genhtml_legend=1 00:11:18.596 --rc geninfo_all_blocks=1 00:11:18.596 --rc geninfo_unexecuted_blocks=1 00:11:18.596 00:11:18.596 ' 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.596 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.597 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.597 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.597 07:54:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.869 07:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:23.869 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:23.869 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:23.869 Found net devices under 0000:86:00.0: cvl_0_0 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
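The trace above resolves the two Intel E810 ports (device ID 0x159b) to their kernel net devices through sysfs. A minimal standalone sketch of that lookup, assuming lspci is available (nvmf/common.sh itself walks a cached PCI bus map rather than calling lspci, so this is an approximation, not the script's code):

  # Sketch only: map each E810 port (8086:159b) to the netdev exposed under its PCI node,
  # mirroring the "Found net devices under <BDF>: <dev>" lines in the trace.
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$netdir" ] || continue   # port has no bound kernel netdev (e.g. held by a userspace driver)
      echo "Found net devices under $pci: $(basename "$netdir")"
    done
  done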
00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.869 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:23.870 Found net devices under 0000:86:00.1: cvl_0_1 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.870 07:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:11:23.870 00:11:23.870 --- 10.0.0.2 ping statistics --- 00:11:23.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.870 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:11:23.870 00:11:23.870 --- 10.0.0.1 ping statistics --- 00:11:23.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.870 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2364940 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2364940 00:11:23.870 07:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2364940 ']' 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.870 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.870 [2024-11-27 07:54:17.789868] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:11:23.870 [2024-11-27 07:54:17.789913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.870 [2024-11-27 07:54:17.855055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.870 [2024-11-27 07:54:17.898021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.870 [2024-11-27 07:54:17.898060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.870 [2024-11-27 07:54:17.898067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.870 [2024-11-27 07:54:17.898073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.870 [2024-11-27 07:54:17.898078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
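For this run the target is started inside a network namespace so target and initiator traffic stay on separate E810 ports. A condensed sketch of the bring-up sequence visible in the trace; interface names, addresses and options are copied from the log above, and this is a sketch rather than the nvmf/common.sh implementation:

  # Move one port into a private namespace, address both ends, and launch nvmf_tgt there.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in, then start the target with full tracing (-e 0xFFFF) on 4 cores (-m 0xF)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &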
00:11:23.870 [2024-11-27 07:54:17.899732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.870 [2024-11-27 07:54:17.899833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.870 [2024-11-27 07:54:17.899927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.870 [2024-11-27 07:54:17.899929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.129 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.129 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:24.129 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.129 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.129 07:54:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 [2024-11-27 07:54:18.042727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 Null1 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 07:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 [2024-11-27 07:54:18.102107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 Null2 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:24.129 Null3 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:24.129 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.130 Null4 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.130 07:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.130 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:24.388 00:11:24.388 Discovery Log Number of Records 6, Generation counter 6 00:11:24.388 =====Discovery Log Entry 0====== 00:11:24.388 trtype: tcp 00:11:24.388 adrfam: ipv4 00:11:24.388 subtype: current discovery subsystem 00:11:24.388 treq: not required 00:11:24.388 portid: 0 00:11:24.388 trsvcid: 4420 00:11:24.388 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:24.388 traddr: 10.0.0.2 00:11:24.388 eflags: explicit discovery connections, duplicate discovery information 00:11:24.388 sectype: none 00:11:24.388 =====Discovery Log Entry 1====== 00:11:24.388 trtype: tcp 00:11:24.388 adrfam: ipv4 00:11:24.388 subtype: nvme subsystem 00:11:24.388 treq: not required 00:11:24.388 portid: 0 00:11:24.388 trsvcid: 4420 00:11:24.388 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:24.388 traddr: 10.0.0.2 00:11:24.388 eflags: none 00:11:24.388 sectype: none 00:11:24.388 =====Discovery Log Entry 2====== 00:11:24.388 trtype: tcp 00:11:24.388 adrfam: ipv4 00:11:24.388 subtype: nvme subsystem 00:11:24.388 treq: not required 00:11:24.388 portid: 0 00:11:24.388 trsvcid: 4420 00:11:24.388 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:24.388 traddr: 10.0.0.2 00:11:24.388 eflags: none 00:11:24.388 sectype: none 00:11:24.388 =====Discovery Log Entry 3====== 00:11:24.388 trtype: tcp 00:11:24.388 adrfam: ipv4 00:11:24.388 subtype: nvme subsystem 00:11:24.388 treq: not required 00:11:24.388 portid: 0 00:11:24.388 trsvcid: 4420 00:11:24.388 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:24.388 traddr: 10.0.0.2 00:11:24.388 eflags: none 00:11:24.388 sectype: none 00:11:24.388 =====Discovery Log Entry 4====== 00:11:24.388 trtype: tcp 00:11:24.388 adrfam: ipv4 00:11:24.388 subtype: nvme subsystem 
00:11:24.388 treq: not required 00:11:24.388 portid: 0 00:11:24.388 trsvcid: 4420 00:11:24.388 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:24.388 traddr: 10.0.0.2 00:11:24.388 eflags: none 00:11:24.388 sectype: none 00:11:24.388 =====Discovery Log Entry 5====== 00:11:24.388 trtype: tcp 00:11:24.388 adrfam: ipv4 00:11:24.388 subtype: discovery subsystem referral 00:11:24.388 treq: not required 00:11:24.388 portid: 0 00:11:24.388 trsvcid: 4430 00:11:24.388 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:24.388 traddr: 10.0.0.2 00:11:24.388 eflags: none 00:11:24.388 sectype: none 00:11:24.388 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:24.388 Perform nvmf subsystem discovery via RPC 00:11:24.388 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:24.388 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.388 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.388 [ 00:11:24.388 { 00:11:24.388 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:24.389 "subtype": "Discovery", 00:11:24.389 "listen_addresses": [ 00:11:24.389 { 00:11:24.389 "trtype": "TCP", 00:11:24.389 "adrfam": "IPv4", 00:11:24.389 "traddr": "10.0.0.2", 00:11:24.389 "trsvcid": "4420" 00:11:24.389 } 00:11:24.389 ], 00:11:24.389 "allow_any_host": true, 00:11:24.389 "hosts": [] 00:11:24.389 }, 00:11:24.389 { 00:11:24.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.389 "subtype": "NVMe", 00:11:24.389 "listen_addresses": [ 00:11:24.389 { 00:11:24.389 "trtype": "TCP", 00:11:24.389 "adrfam": "IPv4", 00:11:24.389 "traddr": "10.0.0.2", 00:11:24.389 "trsvcid": "4420" 00:11:24.389 } 00:11:24.389 ], 00:11:24.389 "allow_any_host": true, 00:11:24.389 "hosts": [], 00:11:24.389 "serial_number": "SPDK00000000000001", 00:11:24.389 "model_number": "SPDK bdev Controller", 00:11:24.389 "max_namespaces": 32, 00:11:24.389 "min_cntlid": 1, 00:11:24.389 "max_cntlid": 65519, 00:11:24.389 "namespaces": [ 00:11:24.389 { 00:11:24.389 "nsid": 1, 00:11:24.389 "bdev_name": "Null1", 00:11:24.389 "name": "Null1", 00:11:24.389 "nguid": "F39597C5D6E04012B9227C10F388E073", 00:11:24.389 "uuid": "f39597c5-d6e0-4012-b922-7c10f388e073" 00:11:24.389 } 00:11:24.389 ] 00:11:24.389 }, 00:11:24.389 { 00:11:24.389 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:24.389 "subtype": "NVMe", 00:11:24.389 "listen_addresses": [ 00:11:24.389 { 00:11:24.389 "trtype": "TCP", 00:11:24.389 "adrfam": "IPv4", 00:11:24.389 "traddr": "10.0.0.2", 00:11:24.389 "trsvcid": "4420" 00:11:24.389 } 00:11:24.389 ], 00:11:24.389 "allow_any_host": true, 00:11:24.389 "hosts": [], 00:11:24.389 "serial_number": "SPDK00000000000002", 00:11:24.389 "model_number": "SPDK bdev Controller", 00:11:24.389 "max_namespaces": 32, 00:11:24.389 "min_cntlid": 1, 00:11:24.389 "max_cntlid": 65519, 00:11:24.389 "namespaces": [ 00:11:24.389 { 00:11:24.389 "nsid": 1, 00:11:24.389 "bdev_name": "Null2", 00:11:24.389 "name": "Null2", 00:11:24.389 "nguid": "7D61F814322B463CB6BDF7565CCAA7F8", 00:11:24.389 "uuid": "7d61f814-322b-463c-b6bd-f7565ccaa7f8" 00:11:24.389 } 00:11:24.389 ] 00:11:24.389 }, 00:11:24.389 { 00:11:24.389 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:24.389 "subtype": "NVMe", 00:11:24.389 "listen_addresses": [ 00:11:24.389 { 00:11:24.389 "trtype": "TCP", 00:11:24.389 "adrfam": "IPv4", 00:11:24.389 "traddr": "10.0.0.2", 
00:11:24.389 "trsvcid": "4420" 00:11:24.389 } 00:11:24.389 ], 00:11:24.389 "allow_any_host": true, 00:11:24.389 "hosts": [], 00:11:24.389 "serial_number": "SPDK00000000000003", 00:11:24.389 "model_number": "SPDK bdev Controller", 00:11:24.389 "max_namespaces": 32, 00:11:24.389 "min_cntlid": 1, 00:11:24.389 "max_cntlid": 65519, 00:11:24.389 "namespaces": [ 00:11:24.389 { 00:11:24.389 "nsid": 1, 00:11:24.389 "bdev_name": "Null3", 00:11:24.389 "name": "Null3", 00:11:24.389 "nguid": "AFCB54DC66F844E39225BB0E879D2033", 00:11:24.389 "uuid": "afcb54dc-66f8-44e3-9225-bb0e879d2033" 00:11:24.389 } 00:11:24.389 ] 00:11:24.389 }, 00:11:24.389 { 00:11:24.389 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:24.389 "subtype": "NVMe", 00:11:24.389 "listen_addresses": [ 00:11:24.389 { 00:11:24.389 "trtype": "TCP", 00:11:24.389 "adrfam": "IPv4", 00:11:24.389 "traddr": "10.0.0.2", 00:11:24.389 "trsvcid": "4420" 00:11:24.389 } 00:11:24.389 ], 00:11:24.389 "allow_any_host": true, 00:11:24.389 "hosts": [], 00:11:24.389 "serial_number": "SPDK00000000000004", 00:11:24.389 "model_number": "SPDK bdev Controller", 00:11:24.389 "max_namespaces": 32, 00:11:24.389 "min_cntlid": 1, 00:11:24.389 "max_cntlid": 65519, 00:11:24.389 "namespaces": [ 00:11:24.389 { 00:11:24.389 "nsid": 1, 00:11:24.389 "bdev_name": "Null4", 00:11:24.389 "name": "Null4", 00:11:24.389 "nguid": "801B1EA3C8E245D6A8331434FA6FD6A7", 00:11:24.389 "uuid": "801b1ea3-c8e2-45d6-a833-1434fa6fd6a7" 00:11:24.389 } 00:11:24.389 ] 00:11:24.389 } 00:11:24.389 ] 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.389 07:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.389 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:24.648 07:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:24.648 rmmod nvme_tcp 00:11:24.648 rmmod nvme_fabrics 00:11:24.648 rmmod nvme_keyring 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2364940 ']' 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2364940 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2364940 ']' 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2364940 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2364940 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2364940' 00:11:24.648 killing process with pid 2364940 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2364940 00:11:24.648 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2364940 00:11:24.907 07:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.907 07:54:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.811 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.811 00:11:26.811 real 0m8.479s 00:11:26.811 user 0m5.317s 00:11:26.811 sys 0m4.213s 00:11:26.811 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.811 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.811 ************************************ 00:11:26.811 END TEST nvmf_target_discovery 00:11:26.811 ************************************ 00:11:27.070 07:54:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:27.070 07:54:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.070 07:54:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.070 07:54:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.070 ************************************ 00:11:27.070 START TEST nvmf_referrals 00:11:27.070 ************************************ 00:11:27.070 07:54:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:27.070 * Looking for test storage... 
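The tail of the nvmf_target_discovery run above walks the test configuration back down: each nqn.2016-06.io.spdk:cnodeN subsystem is deleted, its NullN bdev is removed, the 4430 discovery referral is dropped, and bdev_get_bdevs is used to confirm nothing is left before nvmftestfini unloads the nvme-tcp modules and kills the target. A minimal standalone sketch of that teardown, assuming a running SPDK target and scripts/rpc.py on PATH (the harness issues the same RPCs through its rpc_cmd wrapper):

#!/usr/bin/env bash
# Illustrative sketch of the teardown sequence seen above (assumed paths;
# the test harness drives the same RPCs via rpc_cmd).
set -e
RPC=./scripts/rpc.py

for i in 1 2 3 4; do
    # Remove the NVMe-oF subsystem first, then the null bdev backing it.
    $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    $RPC bdev_null_delete "Null${i}"
done

# Drop the discovery referral that was advertised on port 4430.
$RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

# Nothing should be left behind.
$RPC bdev_get_bdevs | jq -r '.[].name'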
00:11:27.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:27.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.070 --rc genhtml_branch_coverage=1 00:11:27.070 --rc genhtml_function_coverage=1 00:11:27.070 --rc genhtml_legend=1 00:11:27.070 --rc geninfo_all_blocks=1 00:11:27.070 --rc geninfo_unexecuted_blocks=1 00:11:27.070 00:11:27.070 ' 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:27.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.070 --rc genhtml_branch_coverage=1 00:11:27.070 --rc genhtml_function_coverage=1 00:11:27.070 --rc genhtml_legend=1 00:11:27.070 --rc geninfo_all_blocks=1 00:11:27.070 --rc geninfo_unexecuted_blocks=1 00:11:27.070 00:11:27.070 ' 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:27.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.070 --rc genhtml_branch_coverage=1 00:11:27.070 --rc genhtml_function_coverage=1 00:11:27.070 --rc genhtml_legend=1 00:11:27.070 --rc geninfo_all_blocks=1 00:11:27.070 --rc geninfo_unexecuted_blocks=1 00:11:27.070 00:11:27.070 ' 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:27.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.070 --rc genhtml_branch_coverage=1 00:11:27.070 --rc genhtml_function_coverage=1 00:11:27.070 --rc genhtml_legend=1 00:11:27.070 --rc geninfo_all_blocks=1 00:11:27.070 --rc geninfo_unexecuted_blocks=1 00:11:27.070 00:11:27.070 ' 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:27.070 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.071 07:54:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:33.637 07:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:33.637 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:33.637 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:33.637 
07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:33.637 Found net devices under 0000:86:00.0: cvl_0_0 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:33.637 Found net devices under 0000:86:00.1: cvl_0_1 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.637 07:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:33.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:11:33.637 00:11:33.637 --- 10.0.0.2 ping statistics --- 00:11:33.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.637 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:11:33.637 00:11:33.637 --- 10.0.0.1 ping statistics --- 00:11:33.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.637 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2368710 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2368710 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2368710 ']' 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
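At this point the harness has moved the target-side port (cvl_0_0) into the cvl_0_0_ns_spdk namespace, addressed both ends of the 10.0.0.0/24 link, opened port 4420 in iptables, verified reachability with ping in both directions, and launched nvmf_tgt inside the namespace; the "Waiting for process to start up..." line above is waitforlisten blocking on the RPC socket. A condensed sketch of that bring-up, using the interface and namespace names from the log (adjust for other NICs; the nvmf_tgt path is relative to the SPDK tree):

# Sketch of the network bring-up and target launch shown above.
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                     # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator

# Start the target inside the namespace, as nvmfappstart does.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &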
00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.637 07:54:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.637 [2024-11-27 07:54:27.009689] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:11:33.638 [2024-11-27 07:54:27.009736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.638 [2024-11-27 07:54:27.076218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.638 [2024-11-27 07:54:27.118703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.638 [2024-11-27 07:54:27.118741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.638 [2024-11-27 07:54:27.118748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.638 [2024-11-27 07:54:27.118754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.638 [2024-11-27 07:54:27.118758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.638 [2024-11-27 07:54:27.120191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.638 [2024-11-27 07:54:27.120292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.638 [2024-11-27 07:54:27.120366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.638 [2024-11-27 07:54:27.120368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 [2024-11-27 07:54:27.258251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
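With the target accepting RPCs, referrals.sh creates the TCP transport, adds a discovery listener on 10.0.0.2:8009 (its listen notice appears just below), registers referrals to 127.0.0.2, 127.0.0.3 and 127.0.0.4 on port 4430, and checks with jq that exactly three referrals are reported. A sketch of that setup, again assuming scripts/rpc.py against the running target:

# Referral setup exercised by referrals.sh (sketch; rpc.py path assumed).
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# The test expects exactly three referrals at this point.
$RPC nvmf_discovery_get_referrals | jq length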
00:11:33.638 [2024-11-27 07:54:27.281119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:33.638 07:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.638 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.897 07:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:34.156 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:34.156 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:34.156 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:34.156 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:34.156 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:34.156 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.156 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:34.413 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:34.413 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:34.413 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:34.413 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:34.413 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.413 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.680 07:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.680 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:35.004 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:35.004 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:35.004 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:35.004 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:35.004 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:35.004 07:54:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:35.004 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:35.281 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
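For reference, the referral flow exercised above reduces to a handful of RPC and nvme-cli calls. The sketch below is a minimal reconstruction from the trace, assuming rpc_cmd is the harness wrapper around SPDK's RPC client, the discovery service is already listening on 10.0.0.2:8009, and NVME_HOSTNQN/NVME_HOSTID are set as in this run.

  # Register a referral to a second discovery service and to a specific subsystem
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

  # Target-side view: addresses of all registered referrals
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # Host-side view: the same referrals must show up in the discovery log page
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # Remove them again; the test ends once the referral list is empty on both sides
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery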
00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.282 rmmod nvme_tcp 00:11:35.282 rmmod nvme_fabrics 00:11:35.282 rmmod nvme_keyring 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2368710 ']' 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2368710 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2368710 ']' 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2368710 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.282 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2368710 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2368710' 00:11:35.543 killing process with pid 2368710 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2368710 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2368710 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.543 07:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.543 07:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.079 00:11:38.079 real 0m10.678s 00:11:38.079 user 0m12.337s 00:11:38.079 sys 0m5.037s 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.079 ************************************ 00:11:38.079 END TEST nvmf_referrals 00:11:38.079 ************************************ 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:38.079 ************************************ 00:11:38.079 START TEST nvmf_connect_disconnect 00:11:38.079 ************************************ 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:38.079 * Looking for test storage... 00:11:38.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.079 07:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:38.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.079 --rc genhtml_branch_coverage=1 00:11:38.079 --rc genhtml_function_coverage=1 00:11:38.079 --rc genhtml_legend=1 00:11:38.079 --rc geninfo_all_blocks=1 00:11:38.079 --rc geninfo_unexecuted_blocks=1 00:11:38.079 00:11:38.079 ' 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:38.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.079 --rc genhtml_branch_coverage=1 00:11:38.079 --rc genhtml_function_coverage=1 00:11:38.079 --rc genhtml_legend=1 00:11:38.079 --rc geninfo_all_blocks=1 00:11:38.079 --rc geninfo_unexecuted_blocks=1 00:11:38.079 00:11:38.079 ' 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:38.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.079 --rc genhtml_branch_coverage=1 00:11:38.079 --rc genhtml_function_coverage=1 00:11:38.079 --rc genhtml_legend=1 00:11:38.079 --rc geninfo_all_blocks=1 00:11:38.079 --rc geninfo_unexecuted_blocks=1 00:11:38.079 00:11:38.079 ' 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:38.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.079 --rc genhtml_branch_coverage=1 00:11:38.079 --rc genhtml_function_coverage=1 00:11:38.079 --rc genhtml_legend=1 00:11:38.079 --rc geninfo_all_blocks=1 00:11:38.079 --rc geninfo_unexecuted_blocks=1 00:11:38.079 00:11:38.079 ' 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:38.079 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.080 07:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.080 07:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.352 
07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:43.352 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.352 
07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:43.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.352 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:43.611 Found net devices under 0000:86:00.0: cvl_0_0 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
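The device discovery traced above boils down to matching known Intel/Mellanox PCI IDs and then resolving each PCI function to its kernel net device through sysfs. A minimal sketch of that mapping, assuming the ice driver's cvl_* naming seen in this run; lspci here stands in for the harness's prebuilt pci_bus_cache lookup.

  intel=0x8086
  # 0x159b is the E810 device ID matched above; -D keeps the full PCI domain (0000:86:00.0 / 0000:86:00.1 here)
  e810=($(lspci -D -d "$intel:0x159b" | awk '{print $1}'))
  for pci in "${e810[@]}"; do
      # each PCI function exposes its net device name under sysfs
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
  done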
00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:43.611 Found net devices under 0000:86:00.1: cvl_0_1 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:11:43.611 00:11:43.611 --- 10.0.0.2 ping statistics --- 00:11:43.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.611 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:11:43.611 00:11:43.611 --- 10.0.0.1 ping statistics --- 00:11:43.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.611 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.611 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2372791 00:11:43.869 07:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2372791 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2372791 ']' 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.869 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.869 [2024-11-27 07:54:37.788010] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:11:43.869 [2024-11-27 07:54:37.788058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.869 [2024-11-27 07:54:37.852753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.869 [2024-11-27 07:54:37.895587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.869 [2024-11-27 07:54:37.895623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.869 [2024-11-27 07:54:37.895630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.869 [2024-11-27 07:54:37.895636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.870 [2024-11-27 07:54:37.895641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
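Condensed from the nvmftestinit/nvmfappstart trace above: one E810 port is moved into a private network namespace to act as the target while the other stays in the root namespace as the initiator, and nvmf_tgt is then launched inside that namespace. A minimal sketch, assuming the cvl_0_0/cvl_0_1 names and a build tree laid out as in this job.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

  # start the target inside the namespace with the same flags as nvmfappstart -m 0xF above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &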
00:11:43.870 [2024-11-27 07:54:37.897125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.870 [2024-11-27 07:54:37.897220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.870 [2024-11-27 07:54:37.897325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.870 [2024-11-27 07:54:37.897327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.128 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.128 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:44.128 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.128 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.128 07:54:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.128 [2024-11-27 07:54:38.036297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.128 07:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.128 [2024-11-27 07:54:38.097194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:44.128 07:54:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:47.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.543 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:00.543 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:00.543 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.543 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.544 rmmod nvme_tcp 00:12:00.544 rmmod nvme_fabrics 00:12:00.544 rmmod nvme_keyring 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2372791 ']' 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2372791 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2372791 ']' 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2372791 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
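The target-side provisioning for the loop above is visible in the trace (transport, malloc bdev, subsystem, namespace, listener at connect_disconnect.sh@18-24). The five iterations themselves only surface here as the "disconnected 1 controller(s)" lines, so the host-side loop body below is an assumption about what each of the num_iterations=5 passes does, using standard nvme-cli calls.

  # target side, as traced above
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side (hypothetical per-iteration body; only its disconnect output appears in the log)
  for i in $(seq 1 5); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
           --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "... disconnected 1 controller(s)"
  done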
00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372791 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2372791' 00:12:00.544 killing process with pid 2372791 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2372791 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2372791 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.544 07:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.078 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.078 00:12:03.078 real 0m24.922s 00:12:03.078 user 1m7.752s 00:12:03.078 sys 0m5.760s 00:12:03.078 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.078 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.078 ************************************ 00:12:03.078 END TEST nvmf_connect_disconnect 00:12:03.078 ************************************ 00:12:03.078 07:54:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.078 07:54:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.078 07:54:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.078 07:54:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.078 ************************************ 00:12:03.078 START TEST nvmf_multitarget 00:12:03.078 ************************************ 00:12:03.078 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.078 * Looking for test storage... 00:12:03.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.078 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.078 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.079 --rc genhtml_branch_coverage=1 00:12:03.079 --rc genhtml_function_coverage=1 00:12:03.079 --rc genhtml_legend=1 00:12:03.079 --rc geninfo_all_blocks=1 00:12:03.079 --rc geninfo_unexecuted_blocks=1 00:12:03.079 00:12:03.079 ' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.079 --rc genhtml_branch_coverage=1 00:12:03.079 --rc genhtml_function_coverage=1 00:12:03.079 --rc genhtml_legend=1 00:12:03.079 --rc geninfo_all_blocks=1 00:12:03.079 --rc geninfo_unexecuted_blocks=1 00:12:03.079 00:12:03.079 ' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.079 --rc genhtml_branch_coverage=1 00:12:03.079 --rc genhtml_function_coverage=1 00:12:03.079 --rc genhtml_legend=1 00:12:03.079 --rc geninfo_all_blocks=1 00:12:03.079 --rc geninfo_unexecuted_blocks=1 00:12:03.079 00:12:03.079 ' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.079 --rc genhtml_branch_coverage=1 00:12:03.079 --rc genhtml_function_coverage=1 00:12:03.079 --rc genhtml_legend=1 00:12:03.079 --rc geninfo_all_blocks=1 00:12:03.079 --rc geninfo_unexecuted_blocks=1 00:12:03.079 00:12:03.079 ' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.079 07:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:03.079 07:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:03.079 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.080 07:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
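[editor's note] The trace above builds the arrays of supported NIC PCI IDs (e810, x722, mlx), and the lines that follow walk each discovered PCI address and resolve it to a kernel interface through sysfs. A minimal sketch of that mapping, using the two E810 addresses this very log reports (0000:86:00.0 / 0000:86:00.1); this illustrates the pattern visible in the trace, not the literal common.sh source.

    # Resolve each supported PCI NIC to its kernel interface via sysfs,
    # mirroring the "Found net devices under ..." lines further down.
    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done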
00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:08.352 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:08.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.352 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:08.353 Found net devices under 0000:86:00.0: cvl_0_0 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:08.353 Found net devices under 0000:86:00.1: cvl_0_1 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:08.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:12:08.353 00:12:08.353 --- 10.0.0.2 ping statistics --- 00:12:08.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.353 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:12:08.353 00:12:08.353 --- 10.0.0.1 ping statistics --- 00:12:08.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.353 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:08.353 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2378980 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2378980 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2378980 ']' 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.613 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:08.613 [2024-11-27 07:55:02.519149] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
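[editor's note] The nvmftestinit steps traced above move one E810 port (cvl_0_0) into a fresh network namespace for the target side, keep the other port (cvl_0_1) on the host as the initiator, open TCP/4420 in iptables, verify reachability with ping, and then nvmfappstart launches nvmf_tgt inside that namespace (the SPDK/DPDK startup notice continues on the EAL parameter line that follows). A condensed sketch of that plumbing, with the nvmf_tgt path shortened, the SPDK_NVMF iptables comment tag dropped, and the final wait loop standing in for the autotest waitforlisten helper (not its actual implementation):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host
    # Launch the target app inside the namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done  # simplified stand-in for waitforlisten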
00:12:08.613 [2024-11-27 07:55:02.519194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.613 [2024-11-27 07:55:02.586256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.613 [2024-11-27 07:55:02.629431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.613 [2024-11-27 07:55:02.629468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.613 [2024-11-27 07:55:02.629479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.613 [2024-11-27 07:55:02.629485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.613 [2024-11-27 07:55:02.629490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.613 [2024-11-27 07:55:02.631090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.613 [2024-11-27 07:55:02.631187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.613 [2024-11-27 07:55:02.631280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.613 [2024-11-27 07:55:02.631281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:08.873 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:08.873 "nvmf_tgt_1" 00:12:09.131 07:55:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:09.131 "nvmf_tgt_2" 00:12:09.131 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
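[editor's note] multitarget.sh exercises the create/inspect/delete cycle through multitarget_rpc.py; the calls below are the same ones traced immediately above and in the following lines, collected in one place with the script path shortened for readability.

    rpc=./test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_get_targets | jq length              # 1: only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32    # add two extra targets
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc nvmf_get_targets | jq length              # now 3
    $rpc nvmf_delete_target -n nvmf_tgt_1          # tear them back down
    $rpc nvmf_delete_target -n nvmf_tgt_2
    $rpc nvmf_get_targets | jq length              # back to 1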
00:12:09.131 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:09.131 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:09.131 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:09.391 true 00:12:09.391 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:09.391 true 00:12:09.391 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:09.391 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.651 rmmod nvme_tcp 00:12:09.651 rmmod nvme_fabrics 00:12:09.651 rmmod nvme_keyring 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2378980 ']' 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2378980 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2378980 ']' 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2378980 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2378980 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.651 07:55:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2378980' 00:12:09.651 killing process with pid 2378980 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2378980 00:12:09.651 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2378980 00:12:09.910 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:09.910 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:09.910 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:09.910 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:09.910 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:09.910 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:09.910 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:09.910 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:09.910 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:09.910 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.911 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.911 07:55:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.816 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:11.816 00:12:11.816 real 0m9.198s 00:12:11.816 user 0m7.198s 00:12:11.816 sys 0m4.608s 00:12:11.816 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.816 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.816 ************************************ 00:12:11.816 END TEST nvmf_multitarget 00:12:11.816 ************************************ 00:12:12.075 07:55:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:12.075 07:55:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.075 07:55:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.075 07:55:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.075 ************************************ 00:12:12.075 START TEST nvmf_rpc 00:12:12.075 ************************************ 00:12:12.075 07:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:12.075 * Looking for test storage... 
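[editor's note] The START TEST / END TEST banners and the real/user/sys timing lines in this log come from the autotest run_test wrapper, which times each test script and fails the run if it returns non-zero. A rough sketch of that pattern as it applies to the nvmf_rpc invocation above (an illustration only, not the autotest_common.sh source):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                                   # produces the real/user/sys lines seen above
        echo "************ END TEST $name ************"
    }
    run_test nvmf_rpc ./test/nvmf/target/rpc.sh --transport=tcp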
00:12:12.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:12.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.075 --rc genhtml_branch_coverage=1 00:12:12.075 --rc genhtml_function_coverage=1 00:12:12.075 --rc genhtml_legend=1 00:12:12.075 --rc geninfo_all_blocks=1 00:12:12.075 --rc geninfo_unexecuted_blocks=1 00:12:12.075 00:12:12.075 ' 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:12.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.075 --rc genhtml_branch_coverage=1 00:12:12.075 --rc genhtml_function_coverage=1 00:12:12.075 --rc genhtml_legend=1 00:12:12.075 --rc geninfo_all_blocks=1 00:12:12.075 --rc geninfo_unexecuted_blocks=1 00:12:12.075 00:12:12.075 ' 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:12.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.075 --rc genhtml_branch_coverage=1 00:12:12.075 --rc genhtml_function_coverage=1 00:12:12.075 --rc genhtml_legend=1 00:12:12.075 --rc geninfo_all_blocks=1 00:12:12.075 --rc geninfo_unexecuted_blocks=1 00:12:12.075 00:12:12.075 ' 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:12.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.075 --rc genhtml_branch_coverage=1 00:12:12.075 --rc genhtml_function_coverage=1 00:12:12.075 --rc genhtml_legend=1 00:12:12.075 --rc geninfo_all_blocks=1 00:12:12.075 --rc geninfo_unexecuted_blocks=1 00:12:12.075 00:12:12.075 ' 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
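[editor's note] The lines that follow repeat the common.sh identity setup already seen for the multitarget test: nvme gen-hostnqn produces the host NQN, and its UUID suffix doubles as the host ID handed to nvme connect via the NVME_HOST arguments. A small sketch; the shell derivation of the UUID from the NQN is an assumption for illustration (the log only shows the resulting values).

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: the uuid portion of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")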
00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.075 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.076 07:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.076 07:55:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:17.344 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:17.344 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:17.344 Found net devices under 0000:86:00.0: cvl_0_0 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:17.344 Found net devices under 0000:86:00.1: cvl_0_1 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:17.344 07:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.344 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:17.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:12:17.345 00:12:17.345 --- 10.0.0.2 ping statistics --- 00:12:17.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.345 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:17.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:12:17.345 00:12:17.345 --- 10.0.0.1 ping statistics --- 00:12:17.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.345 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2382751 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2382751 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2382751 ']' 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.345 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.345 [2024-11-27 07:55:11.434249] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
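Condensed, the interface preparation traced above comes down to the following sketch. The interface names (cvl_0_0, cvl_0_1), the namespace name and the 10.0.0.x addresses are taken from the trace; treat this as a recap of what the harness's nvmf_tcp_init step does here, not a verbatim copy of the script.

# Move the target-side port into its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and verify reachability in both directions before starting the target.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1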
00:12:17.345 [2024-11-27 07:55:11.434293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.603 [2024-11-27 07:55:11.500365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.603 [2024-11-27 07:55:11.543429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.603 [2024-11-27 07:55:11.543467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.603 [2024-11-27 07:55:11.543475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.603 [2024-11-27 07:55:11.543481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.603 [2024-11-27 07:55:11.543486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.603 [2024-11-27 07:55:11.544928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.603 [2024-11-27 07:55:11.544960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.603 [2024-11-27 07:55:11.545014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.603 [2024-11-27 07:55:11.545016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.603 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:17.603 "tick_rate": 2300000000, 00:12:17.603 "poll_groups": [ 00:12:17.603 { 00:12:17.603 "name": "nvmf_tgt_poll_group_000", 00:12:17.603 "admin_qpairs": 0, 00:12:17.603 "io_qpairs": 0, 00:12:17.603 "current_admin_qpairs": 0, 00:12:17.603 "current_io_qpairs": 0, 00:12:17.603 "pending_bdev_io": 0, 00:12:17.603 "completed_nvme_io": 0, 00:12:17.603 "transports": [] 00:12:17.603 }, 00:12:17.603 { 00:12:17.603 "name": "nvmf_tgt_poll_group_001", 00:12:17.603 "admin_qpairs": 0, 00:12:17.603 "io_qpairs": 0, 00:12:17.603 "current_admin_qpairs": 0, 00:12:17.603 "current_io_qpairs": 0, 00:12:17.603 "pending_bdev_io": 0, 00:12:17.603 "completed_nvme_io": 0, 00:12:17.603 "transports": [] 00:12:17.603 }, 00:12:17.603 { 00:12:17.603 "name": "nvmf_tgt_poll_group_002", 00:12:17.603 "admin_qpairs": 0, 00:12:17.603 "io_qpairs": 0, 00:12:17.603 
"current_admin_qpairs": 0, 00:12:17.603 "current_io_qpairs": 0, 00:12:17.603 "pending_bdev_io": 0, 00:12:17.603 "completed_nvme_io": 0, 00:12:17.603 "transports": [] 00:12:17.603 }, 00:12:17.603 { 00:12:17.603 "name": "nvmf_tgt_poll_group_003", 00:12:17.603 "admin_qpairs": 0, 00:12:17.603 "io_qpairs": 0, 00:12:17.603 "current_admin_qpairs": 0, 00:12:17.603 "current_io_qpairs": 0, 00:12:17.603 "pending_bdev_io": 0, 00:12:17.603 "completed_nvme_io": 0, 00:12:17.603 "transports": [] 00:12:17.603 } 00:12:17.603 ] 00:12:17.603 }' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.862 [2024-11-27 07:55:11.800090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:17.862 "tick_rate": 2300000000, 00:12:17.862 "poll_groups": [ 00:12:17.862 { 00:12:17.862 "name": "nvmf_tgt_poll_group_000", 00:12:17.862 "admin_qpairs": 0, 00:12:17.862 "io_qpairs": 0, 00:12:17.862 "current_admin_qpairs": 0, 00:12:17.862 "current_io_qpairs": 0, 00:12:17.862 "pending_bdev_io": 0, 00:12:17.862 "completed_nvme_io": 0, 00:12:17.862 "transports": [ 00:12:17.862 { 00:12:17.862 "trtype": "TCP" 00:12:17.862 } 00:12:17.862 ] 00:12:17.862 }, 00:12:17.862 { 00:12:17.862 "name": "nvmf_tgt_poll_group_001", 00:12:17.862 "admin_qpairs": 0, 00:12:17.862 "io_qpairs": 0, 00:12:17.862 "current_admin_qpairs": 0, 00:12:17.862 "current_io_qpairs": 0, 00:12:17.862 "pending_bdev_io": 0, 00:12:17.862 "completed_nvme_io": 0, 00:12:17.862 "transports": [ 00:12:17.862 { 00:12:17.862 "trtype": "TCP" 00:12:17.862 } 00:12:17.862 ] 00:12:17.862 }, 00:12:17.862 { 00:12:17.862 "name": "nvmf_tgt_poll_group_002", 00:12:17.862 "admin_qpairs": 0, 00:12:17.862 "io_qpairs": 0, 00:12:17.862 "current_admin_qpairs": 0, 00:12:17.862 "current_io_qpairs": 0, 00:12:17.862 "pending_bdev_io": 0, 00:12:17.862 "completed_nvme_io": 0, 00:12:17.862 "transports": [ 00:12:17.862 { 00:12:17.862 "trtype": "TCP" 
00:12:17.862 } 00:12:17.862 ] 00:12:17.862 }, 00:12:17.862 { 00:12:17.862 "name": "nvmf_tgt_poll_group_003", 00:12:17.862 "admin_qpairs": 0, 00:12:17.862 "io_qpairs": 0, 00:12:17.862 "current_admin_qpairs": 0, 00:12:17.862 "current_io_qpairs": 0, 00:12:17.862 "pending_bdev_io": 0, 00:12:17.862 "completed_nvme_io": 0, 00:12:17.862 "transports": [ 00:12:17.862 { 00:12:17.862 "trtype": "TCP" 00:12:17.862 } 00:12:17.862 ] 00:12:17.862 } 00:12:17.862 ] 00:12:17.862 }' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.862 Malloc1 00:12:17.862 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.863 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.121 [2024-11-27 07:55:11.976997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:18.121 07:55:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:18.121 [2024-11-27 07:55:12.005558] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:18.121 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:18.121 could not add new controller: failed to write to nvme-fabrics device 00:12:18.121 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:18.121 07:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:18.121 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:18.121 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:18.121 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:18.121 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.121 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.121 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.121 07:55:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.495 07:55:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.495 07:55:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:19.495 07:55:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.495 07:55:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:19.495 07:55:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:21.422 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.423 [2024-11-27 07:55:15.431816] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:21.423 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:21.423 could not add new controller: failed to write to nvme-fabrics device 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.423 
07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.423 07:55:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.797 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.797 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:22.797 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.797 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:22.797 07:55:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.853 
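The allow/deny exchange above reduces to the sketch below. The subsystem NQN, host NQN/ID, address and port are copied from the trace; the shell variables are introduced only for brevity, rpc_cmd is the harness's JSON-RPC wrapper seen throughout the trace, and NOT is its helper that succeeds only when the wrapped command fails.

SUBSYS=nqn.2016-06.io.spdk:cnode1
HOST_ID=80aaeb9f-0274-ea11-906e-0017a4403562
HOST_NQN=nqn.2014-08.org.nvmexpress:uuid:$HOST_ID

# With allow_any_host disabled, an unlisted host is rejected ("does not allow host").
rpc_cmd nvmf_subsystem_allow_any_host -d "$SUBSYS"
NOT nvme connect --hostnqn="$HOST_NQN" --hostid="$HOST_ID" -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420

# Whitelisting the host NQN makes the same connect succeed.
rpc_cmd nvmf_subsystem_add_host "$SUBSYS" "$HOST_NQN"
nvme connect --hostnqn="$HOST_NQN" --hostid="$HOST_ID" -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420
nvme disconnect -n "$SUBSYS"

# Removing the host closes access again; re-enabling allow_any_host opens it to everyone.
rpc_cmd nvmf_subsystem_remove_host "$SUBSYS" "$HOST_NQN"
NOT nvme connect --hostnqn="$HOST_NQN" --hostid="$HOST_ID" -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_allow_any_host -e "$SUBSYS"
nvme connect --hostnqn="$HOST_NQN" --hostid="$HOST_ID" -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420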
07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.853 [2024-11-27 07:55:18.806111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.853 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.854 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.854 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.854 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.854 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.854 07:55:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.226 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.226 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:26.226 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.226 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:26.226 07:55:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:28.126 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:28.126 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:28.126 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.126 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:28.126 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.126 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:28.126 07:55:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 [2024-11-27 07:55:22.081325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.126 07:55:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.500 07:55:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.500 07:55:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:29.500 07:55:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.500 07:55:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:29.500 07:55:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:31.399 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:31.399 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:31.399 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.399 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:31.399 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.399 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:31.399 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.400 [2024-11-27 07:55:25.403449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.400 07:55:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.774 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.774 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:32.774 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.774 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:32.774 07:55:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:34.674 
07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:34.674 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.675 [2024-11-27 07:55:28.706372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.675 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.675 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:34.675 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.675 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.675 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.675 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.675 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.675 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.675 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.675 07:55:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.064 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.064 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:36.064 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.064 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:36.064 07:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 
00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.965 07:55:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.965 [2024-11-27 07:55:32.015064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.965 07:55:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.340 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.340 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:39.340 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.340 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:39.340 07:55:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:41.242 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:41.501 
07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.501 [2024-11-27 07:55:35.397810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.501 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 [2024-11-27 07:55:35.445885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 
07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 [2024-11-27 07:55:35.494035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 [2024-11-27 07:55:35.542187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.502 [2024-11-27 07:55:35.590359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.502 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.503 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.503 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.503 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.503 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.503 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:41.761 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:41.762 "tick_rate": 2300000000, 00:12:41.762 "poll_groups": [ 00:12:41.762 { 00:12:41.762 "name": "nvmf_tgt_poll_group_000", 00:12:41.762 "admin_qpairs": 2, 00:12:41.762 "io_qpairs": 168, 00:12:41.762 "current_admin_qpairs": 0, 00:12:41.762 "current_io_qpairs": 0, 00:12:41.762 "pending_bdev_io": 0, 00:12:41.762 "completed_nvme_io": 222, 00:12:41.762 "transports": [ 00:12:41.762 { 00:12:41.762 "trtype": "TCP" 00:12:41.762 } 00:12:41.762 ] 00:12:41.762 }, 00:12:41.762 { 00:12:41.762 "name": "nvmf_tgt_poll_group_001", 00:12:41.762 "admin_qpairs": 2, 00:12:41.762 "io_qpairs": 168, 00:12:41.762 "current_admin_qpairs": 0, 00:12:41.762 "current_io_qpairs": 0, 00:12:41.762 "pending_bdev_io": 0, 00:12:41.762 "completed_nvme_io": 301, 00:12:41.762 "transports": [ 00:12:41.762 { 00:12:41.762 "trtype": "TCP" 00:12:41.762 } 00:12:41.762 ] 00:12:41.762 }, 00:12:41.762 { 00:12:41.762 "name": "nvmf_tgt_poll_group_002", 00:12:41.762 "admin_qpairs": 1, 00:12:41.762 "io_qpairs": 168, 00:12:41.762 "current_admin_qpairs": 0, 00:12:41.762 "current_io_qpairs": 0, 00:12:41.762 "pending_bdev_io": 0, 00:12:41.762 "completed_nvme_io": 266, 00:12:41.762 "transports": [ 00:12:41.762 { 00:12:41.762 "trtype": "TCP" 00:12:41.762 } 00:12:41.762 ] 00:12:41.762 }, 00:12:41.762 { 00:12:41.762 "name": "nvmf_tgt_poll_group_003", 00:12:41.762 "admin_qpairs": 2, 00:12:41.762 "io_qpairs": 168, 00:12:41.762 "current_admin_qpairs": 0, 00:12:41.762 "current_io_qpairs": 0, 00:12:41.762 "pending_bdev_io": 0, 00:12:41.762 "completed_nvme_io": 233, 00:12:41.762 "transports": [ 00:12:41.762 { 00:12:41.762 "trtype": "TCP" 00:12:41.762 } 00:12:41.762 ] 00:12:41.762 } 00:12:41.762 ] 00:12:41.762 }' 00:12:41.762 07:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.762 rmmod nvme_tcp 00:12:41.762 rmmod nvme_fabrics 00:12:41.762 rmmod nvme_keyring 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2382751 ']' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2382751 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2382751 ']' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2382751 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.762 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2382751 00:12:42.021 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.021 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.021 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2382751' 00:12:42.021 killing process with pid 2382751 00:12:42.022 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2382751 00:12:42.022 07:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2382751 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.022 07:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:44.553 00:12:44.553 real 0m32.161s 00:12:44.553 user 1m39.024s 00:12:44.553 sys 0m5.976s 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.553 ************************************ 00:12:44.553 END TEST nvmf_rpc 00:12:44.553 ************************************ 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.553 ************************************ 00:12:44.553 START TEST nvmf_invalid 00:12:44.553 ************************************ 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:44.553 * Looking for test storage... 
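Just before the teardown above, rpc.sh checks the target's final statistics by summing counters across poll groups (the jsum helper at target/rpc.sh@112-113). A minimal standalone version of that aggregation, using the same jq filter and awk reducer the log shows, but piping a fresh nvmf_get_stats call instead of the stored $stats variable, would look like this (run while the target is still up):

    RPC=scripts/rpc.py   # .../spdk/scripts/rpc.py in this run

    # Sum .poll_groups[].admin_qpairs across all poll groups (2+2+1+2 = 7 in this run).
    $RPC nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'

    # Same for io_qpairs (4 x 168 = 672 here); the test only asserts both sums are > 0.
    $RPC nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}'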
00:12:44.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:44.553 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.554 --rc genhtml_branch_coverage=1 00:12:44.554 --rc genhtml_function_coverage=1 00:12:44.554 --rc genhtml_legend=1 00:12:44.554 --rc geninfo_all_blocks=1 00:12:44.554 --rc geninfo_unexecuted_blocks=1 00:12:44.554 00:12:44.554 ' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.554 --rc genhtml_branch_coverage=1 00:12:44.554 --rc genhtml_function_coverage=1 00:12:44.554 --rc genhtml_legend=1 00:12:44.554 --rc geninfo_all_blocks=1 00:12:44.554 --rc geninfo_unexecuted_blocks=1 00:12:44.554 00:12:44.554 ' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.554 --rc genhtml_branch_coverage=1 00:12:44.554 --rc genhtml_function_coverage=1 00:12:44.554 --rc genhtml_legend=1 00:12:44.554 --rc geninfo_all_blocks=1 00:12:44.554 --rc geninfo_unexecuted_blocks=1 00:12:44.554 00:12:44.554 ' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.554 --rc genhtml_branch_coverage=1 00:12:44.554 --rc genhtml_function_coverage=1 00:12:44.554 --rc genhtml_legend=1 00:12:44.554 --rc geninfo_all_blocks=1 00:12:44.554 --rc geninfo_unexecuted_blocks=1 00:12:44.554 00:12:44.554 ' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:44.554 07:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.554 07:55:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:49.824 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.824 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:49.824 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:49.825 Found net devices under 0000:86:00.0: cvl_0_0 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:49.825 Found net devices under 0000:86:00.1: cvl_0_1 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:49.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:12:49.825 00:12:49.825 --- 10.0.0.2 ping statistics --- 00:12:49.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.825 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:12:49.825 00:12:49.825 --- 10.0.0.1 ping statistics --- 00:12:49.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.825 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2390361 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2390361 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2390361 ']' 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.825 [2024-11-27 07:55:43.539469] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
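The nvmf_invalid test rebuilds the same phy/e810 network fixture before starting a fresh target, as logged above. Condensed into plain commands (a sketch, not the common.sh helpers themselves: device names, addresses, firewall comment and app flags are copied from the log, the address-flush steps are omitted, and the nvmf_tgt path under the Jenkins workspace is shortened to the SPDK tree):

    # Move the target-side E810 port into its own namespace and address both ends.
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic in, tagged so the teardown can find and remove the rule.
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity pings in both directions, then load the initiator driver and start the target.
    ping -c 1 10.0.0.2
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    sudo modprobe nvme-tcp
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # The script records $! as nvmfpid and polls /var/tmp/spdk.sock before issuing RPCs.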
00:12:49.825 [2024-11-27 07:55:43.539514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.825 [2024-11-27 07:55:43.604930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.825 [2024-11-27 07:55:43.647517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.825 [2024-11-27 07:55:43.647555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.825 [2024-11-27 07:55:43.647562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.825 [2024-11-27 07:55:43.647568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.825 [2024-11-27 07:55:43.647573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.825 [2024-11-27 07:55:43.649070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.825 [2024-11-27 07:55:43.649168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.825 [2024-11-27 07:55:43.649229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.825 [2024-11-27 07:55:43.649231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:49.825 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22535 00:12:50.084 [2024-11-27 07:55:43.956384] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:50.084 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:50.084 { 00:12:50.084 "nqn": "nqn.2016-06.io.spdk:cnode22535", 00:12:50.084 "tgt_name": "foobar", 00:12:50.084 "method": "nvmf_create_subsystem", 00:12:50.084 "req_id": 1 00:12:50.084 } 00:12:50.084 Got JSON-RPC error response 00:12:50.084 response: 00:12:50.084 { 00:12:50.084 "code": -32603, 00:12:50.084 "message": "Unable to find target foobar" 00:12:50.084 }' 00:12:50.084 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:50.084 { 00:12:50.084 "nqn": "nqn.2016-06.io.spdk:cnode22535", 00:12:50.084 "tgt_name": "foobar", 00:12:50.084 "method": "nvmf_create_subsystem", 00:12:50.084 "req_id": 1 00:12:50.084 } 00:12:50.084 Got JSON-RPC error response 00:12:50.084 
response: 00:12:50.084 { 00:12:50.084 "code": -32603, 00:12:50.084 "message": "Unable to find target foobar" 00:12:50.084 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:50.084 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:50.084 07:55:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24775 00:12:50.084 [2024-11-27 07:55:44.157070] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24775: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:50.084 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:50.084 { 00:12:50.084 "nqn": "nqn.2016-06.io.spdk:cnode24775", 00:12:50.084 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:50.084 "method": "nvmf_create_subsystem", 00:12:50.084 "req_id": 1 00:12:50.084 } 00:12:50.084 Got JSON-RPC error response 00:12:50.084 response: 00:12:50.084 { 00:12:50.084 "code": -32602, 00:12:50.084 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:50.084 }' 00:12:50.084 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:50.084 { 00:12:50.084 "nqn": "nqn.2016-06.io.spdk:cnode24775", 00:12:50.084 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:50.084 "method": "nvmf_create_subsystem", 00:12:50.084 "req_id": 1 00:12:50.084 } 00:12:50.084 Got JSON-RPC error response 00:12:50.084 response: 00:12:50.084 { 00:12:50.084 "code": -32602, 00:12:50.084 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:50.084 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:50.084 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:50.342 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30708 00:12:50.342 [2024-11-27 07:55:44.357742] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30708: invalid model number 'SPDK_Controller' 00:12:50.342 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:50.342 { 00:12:50.342 "nqn": "nqn.2016-06.io.spdk:cnode30708", 00:12:50.342 "model_number": "SPDK_Controller\u001f", 00:12:50.342 "method": "nvmf_create_subsystem", 00:12:50.342 "req_id": 1 00:12:50.342 } 00:12:50.342 Got JSON-RPC error response 00:12:50.342 response: 00:12:50.342 { 00:12:50.342 "code": -32602, 00:12:50.342 "message": "Invalid MN SPDK_Controller\u001f" 00:12:50.342 }' 00:12:50.342 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:50.342 { 00:12:50.342 "nqn": "nqn.2016-06.io.spdk:cnode30708", 00:12:50.342 "model_number": "SPDK_Controller\u001f", 00:12:50.342 "method": "nvmf_create_subsystem", 00:12:50.342 "req_id": 1 00:12:50.342 } 00:12:50.342 Got JSON-RPC error response 00:12:50.342 response: 00:12:50.342 { 00:12:50.342 "code": -32602, 00:12:50.342 "message": "Invalid MN SPDK_Controller\u001f" 00:12:50.342 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:50.343 07:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
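The long run of printf/echo records around this point is gen_random_s from target/invalid.sh building a 21-character string one character at a time: pick a code point between 32 and 127, turn it into hex with printf %x, expand it to a literal character with echo -e, and append it to the result; the trace of the remaining characters continues below until the finished string is echoed. A condensed sketch of the same loop (the random selection itself is not visible in the trace, so the $RANDOM indexing here is an assumption):

    # Build a random string from code points 32..127, one character per iteration.
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))                      # same code-point pool as the trace
        for ((ll = 0; ll < length; ll++)); do
            local code=${chars[RANDOM % ${#chars[@]}]}   # selection step assumed, not shown in the log
            string+=$(echo -e "\\x$(printf %x "$code")") # decimal -> hex -> literal character
        done
        echo "$string"
    }

The odd lengths appear deliberate: 21 and 41 characters are one byte longer than the NVMe serial-number (20 bytes) and model-number (40 bytes) fields, so nvmf_create_subsystem has to reject whatever this loop produces.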
00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6d' 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.343 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
96 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.601 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.602 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:50.602 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:50.602 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:50.602 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.602 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.602 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:12:50.602 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'g!:f}k%Im06s)YJ`D<[^v' 00:12:50.602 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'g!:f}k%Im06s)YJ`D<[^v' nqn.2016-06.io.spdk:cnode10848 00:12:50.602 [2024-11-27 07:55:44.706951] nvmf_rpc.c: 
413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10848: invalid serial number 'g!:f}k%Im06s)YJ`D<[^v' 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:50.861 { 00:12:50.861 "nqn": "nqn.2016-06.io.spdk:cnode10848", 00:12:50.861 "serial_number": "g!:f}k%Im06s)YJ`D<[^v", 00:12:50.861 "method": "nvmf_create_subsystem", 00:12:50.861 "req_id": 1 00:12:50.861 } 00:12:50.861 Got JSON-RPC error response 00:12:50.861 response: 00:12:50.861 { 00:12:50.861 "code": -32602, 00:12:50.861 "message": "Invalid SN g!:f}k%Im06s)YJ`D<[^v" 00:12:50.861 }' 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:50.861 { 00:12:50.861 "nqn": "nqn.2016-06.io.spdk:cnode10848", 00:12:50.861 "serial_number": "g!:f}k%Im06s)YJ`D<[^v", 00:12:50.861 "method": "nvmf_create_subsystem", 00:12:50.861 "req_id": 1 00:12:50.861 } 00:12:50.861 Got JSON-RPC error response 00:12:50.861 response: 00:12:50.861 { 00:12:50.861 "code": -32602, 00:12:50.861 "message": "Invalid SN g!:f}k%Im06s)YJ`D<[^v" 00:12:50.861 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 
00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:50.861 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 
00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:50.862 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.863 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 
00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:51.120 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.121 07:55:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '_?2,abe3sLV]U)u@Y,,m}#;8GfV4>kIPbV3Gc!C2m' 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '_?2,abe3sLV]U)u@Y,,m}#;8GfV4>kIPbV3Gc!C2m' nqn.2016-06.io.spdk:cnode11782 00:12:51.121 [2024-11-27 07:55:45.184525] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11782: invalid model number '_?2,abe3sLV]U)u@Y,,m}#;8GfV4>kIPbV3Gc!C2m' 00:12:51.121 07:55:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:51.121 { 00:12:51.121 "nqn": "nqn.2016-06.io.spdk:cnode11782", 00:12:51.121 "model_number": "_?2,abe3sLV]U)u@Y,,m}#;8GfV4>kIPbV3Gc!C2m", 00:12:51.121 "method": "nvmf_create_subsystem", 00:12:51.121 "req_id": 1 00:12:51.121 } 00:12:51.121 Got JSON-RPC error response 00:12:51.121 response: 00:12:51.121 { 00:12:51.121 "code": -32602, 00:12:51.121 "message": "Invalid MN _?2,abe3sLV]U)u@Y,,m}#;8GfV4>kIPbV3Gc!C2m" 00:12:51.121 }' 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:51.121 { 00:12:51.121 "nqn": "nqn.2016-06.io.spdk:cnode11782", 00:12:51.121 "model_number": "_?2,abe3sLV]U)u@Y,,m}#;8GfV4>kIPbV3Gc!C2m", 00:12:51.121 "method": "nvmf_create_subsystem", 00:12:51.121 "req_id": 1 00:12:51.121 } 00:12:51.121 Got JSON-RPC error response 00:12:51.121 response: 00:12:51.121 { 00:12:51.121 "code": -32602, 00:12:51.121 "message": "Invalid MN _?2,abe3sLV]U)u@Y,,m}#;8GfV4>kIPbV3Gc!C2m" 00:12:51.121 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:51.121 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:51.378 [2024-11-27 07:55:45.385269] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.378 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:51.636 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:51.636 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:51.636 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:51.636 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:51.636 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:51.894 [2024-11-27 07:55:45.790576] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:51.894 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:51.894 { 00:12:51.894 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:51.894 "listen_address": { 00:12:51.894 "trtype": "tcp", 00:12:51.894 "traddr": "", 00:12:51.894 "trsvcid": "4421" 00:12:51.894 }, 00:12:51.894 "method": "nvmf_subsystem_remove_listener", 00:12:51.894 "req_id": 1 00:12:51.894 } 00:12:51.894 Got JSON-RPC error response 00:12:51.894 response: 00:12:51.894 { 00:12:51.894 "code": -32602, 00:12:51.894 "message": "Invalid parameters" 00:12:51.894 }' 00:12:51.894 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:51.894 { 00:12:51.894 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:51.894 "listen_address": { 00:12:51.894 "trtype": "tcp", 00:12:51.894 "traddr": "", 00:12:51.894 "trsvcid": "4421" 00:12:51.894 }, 00:12:51.894 "method": "nvmf_subsystem_remove_listener", 00:12:51.894 "req_id": 1 00:12:51.894 } 00:12:51.894 Got JSON-RPC error response 00:12:51.894 response: 00:12:51.894 { 00:12:51.894 "code": -32602, 00:12:51.894 "message": "Invalid parameters" 00:12:51.894 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ 
\l\i\s\t\e\n\e\r\.* ]] 00:12:51.894 07:55:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8293 -i 0 00:12:52.151 [2024-11-27 07:55:46.003240] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8293: invalid cntlid range [0-65519] 00:12:52.151 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:52.151 { 00:12:52.151 "nqn": "nqn.2016-06.io.spdk:cnode8293", 00:12:52.151 "min_cntlid": 0, 00:12:52.151 "method": "nvmf_create_subsystem", 00:12:52.151 "req_id": 1 00:12:52.151 } 00:12:52.151 Got JSON-RPC error response 00:12:52.151 response: 00:12:52.151 { 00:12:52.151 "code": -32602, 00:12:52.151 "message": "Invalid cntlid range [0-65519]" 00:12:52.151 }' 00:12:52.151 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:52.151 { 00:12:52.151 "nqn": "nqn.2016-06.io.spdk:cnode8293", 00:12:52.151 "min_cntlid": 0, 00:12:52.151 "method": "nvmf_create_subsystem", 00:12:52.151 "req_id": 1 00:12:52.151 } 00:12:52.151 Got JSON-RPC error response 00:12:52.151 response: 00:12:52.151 { 00:12:52.151 "code": -32602, 00:12:52.151 "message": "Invalid cntlid range [0-65519]" 00:12:52.151 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.151 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13620 -i 65520 00:12:52.151 [2024-11-27 07:55:46.215966] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13620: invalid cntlid range [65520-65519] 00:12:52.151 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:52.151 { 00:12:52.151 "nqn": "nqn.2016-06.io.spdk:cnode13620", 00:12:52.151 "min_cntlid": 65520, 00:12:52.151 "method": "nvmf_create_subsystem", 00:12:52.151 "req_id": 1 00:12:52.151 } 00:12:52.151 Got JSON-RPC error response 00:12:52.151 response: 00:12:52.151 { 00:12:52.151 "code": -32602, 00:12:52.151 "message": "Invalid cntlid range [65520-65519]" 00:12:52.151 }' 00:12:52.151 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:52.151 { 00:12:52.151 "nqn": "nqn.2016-06.io.spdk:cnode13620", 00:12:52.151 "min_cntlid": 65520, 00:12:52.151 "method": "nvmf_create_subsystem", 00:12:52.151 "req_id": 1 00:12:52.151 } 00:12:52.151 Got JSON-RPC error response 00:12:52.151 response: 00:12:52.151 { 00:12:52.151 "code": -32602, 00:12:52.151 "message": "Invalid cntlid range [65520-65519]" 00:12:52.151 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.151 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8134 -I 0 00:12:52.408 [2024-11-27 07:55:46.412644] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8134: invalid cntlid range [1-0] 00:12:52.408 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:52.408 { 00:12:52.408 "nqn": "nqn.2016-06.io.spdk:cnode8134", 00:12:52.408 "max_cntlid": 0, 00:12:52.408 "method": "nvmf_create_subsystem", 00:12:52.408 "req_id": 1 00:12:52.408 } 00:12:52.408 Got JSON-RPC error response 00:12:52.408 response: 00:12:52.408 { 00:12:52.408 "code": 
-32602, 00:12:52.408 "message": "Invalid cntlid range [1-0]" 00:12:52.408 }' 00:12:52.408 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:52.408 { 00:12:52.408 "nqn": "nqn.2016-06.io.spdk:cnode8134", 00:12:52.408 "max_cntlid": 0, 00:12:52.408 "method": "nvmf_create_subsystem", 00:12:52.408 "req_id": 1 00:12:52.408 } 00:12:52.408 Got JSON-RPC error response 00:12:52.408 response: 00:12:52.408 { 00:12:52.408 "code": -32602, 00:12:52.408 "message": "Invalid cntlid range [1-0]" 00:12:52.408 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.408 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32758 -I 65520 00:12:52.666 [2024-11-27 07:55:46.613324] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32758: invalid cntlid range [1-65520] 00:12:52.666 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:52.666 { 00:12:52.666 "nqn": "nqn.2016-06.io.spdk:cnode32758", 00:12:52.666 "max_cntlid": 65520, 00:12:52.666 "method": "nvmf_create_subsystem", 00:12:52.666 "req_id": 1 00:12:52.666 } 00:12:52.666 Got JSON-RPC error response 00:12:52.666 response: 00:12:52.666 { 00:12:52.666 "code": -32602, 00:12:52.666 "message": "Invalid cntlid range [1-65520]" 00:12:52.666 }' 00:12:52.666 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:52.666 { 00:12:52.666 "nqn": "nqn.2016-06.io.spdk:cnode32758", 00:12:52.666 "max_cntlid": 65520, 00:12:52.666 "method": "nvmf_create_subsystem", 00:12:52.666 "req_id": 1 00:12:52.666 } 00:12:52.666 Got JSON-RPC error response 00:12:52.666 response: 00:12:52.666 { 00:12:52.666 "code": -32602, 00:12:52.666 "message": "Invalid cntlid range [1-65520]" 00:12:52.666 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.666 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9936 -i 6 -I 5 00:12:52.925 [2024-11-27 07:55:46.810054] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9936: invalid cntlid range [6-5] 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:52.925 { 00:12:52.925 "nqn": "nqn.2016-06.io.spdk:cnode9936", 00:12:52.925 "min_cntlid": 6, 00:12:52.925 "max_cntlid": 5, 00:12:52.925 "method": "nvmf_create_subsystem", 00:12:52.925 "req_id": 1 00:12:52.925 } 00:12:52.925 Got JSON-RPC error response 00:12:52.925 response: 00:12:52.925 { 00:12:52.925 "code": -32602, 00:12:52.925 "message": "Invalid cntlid range [6-5]" 00:12:52.925 }' 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:52.925 { 00:12:52.925 "nqn": "nqn.2016-06.io.spdk:cnode9936", 00:12:52.925 "min_cntlid": 6, 00:12:52.925 "max_cntlid": 5, 00:12:52.925 "method": "nvmf_create_subsystem", 00:12:52.925 "req_id": 1 00:12:52.925 } 00:12:52.925 Got JSON-RPC error response 00:12:52.925 response: 00:12:52.925 { 00:12:52.925 "code": -32602, 00:12:52.925 "message": "Invalid cntlid range [6-5]" 00:12:52.925 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:52.925 { 00:12:52.925 "name": "foobar", 00:12:52.925 "method": "nvmf_delete_target", 00:12:52.925 "req_id": 1 00:12:52.925 } 00:12:52.925 Got JSON-RPC error response 00:12:52.925 response: 00:12:52.925 { 00:12:52.925 "code": -32602, 00:12:52.925 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:52.925 }' 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:52.925 { 00:12:52.925 "name": "foobar", 00:12:52.925 "method": "nvmf_delete_target", 00:12:52.925 "req_id": 1 00:12:52.925 } 00:12:52.925 Got JSON-RPC error response 00:12:52.925 response: 00:12:52.925 { 00:12:52.925 "code": -32602, 00:12:52.925 "message": "The specified target doesn't exist, cannot delete it." 00:12:52.925 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.925 07:55:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:52.925 rmmod nvme_tcp 00:12:52.925 rmmod nvme_fabrics 00:12:52.925 rmmod nvme_keyring 00:12:52.925 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.925 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:52.925 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:52.925 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2390361 ']' 00:12:52.925 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2390361 00:12:52.925 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2390361 ']' 00:12:52.925 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2390361 00:12:53.183 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:53.183 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.183 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2390361 00:12:53.183 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2390361' 00:12:53.184 killing process with pid 2390361 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2390361 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2390361 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.184 07:55:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:55.719 00:12:55.719 real 0m11.121s 00:12:55.719 user 0m18.232s 00:12:55.719 sys 0m4.815s 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.719 ************************************ 00:12:55.719 END TEST nvmf_invalid 00:12:55.719 ************************************ 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.719 ************************************ 00:12:55.719 START TEST nvmf_connect_stress 00:12:55.719 ************************************ 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:55.719 * Looking for test storage... 
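The nvmf_invalid checks above all follow the same negative-test pattern: issue an RPC that is expected to fail, capture the JSON-RPC error, and pass only if the error text matches the expected message. A minimal sketch of one such check, assuming a target is already serving RPCs on the default /var/tmp/spdk.sock (the nqn and cntlid values are copied from the trace above and are otherwise arbitrary):

    # Request an impossible cntlid range; keep the error text instead of
    # letting an errexit shell abort on the non-zero exit code.
    out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9936 -i 6 -I 5 2>&1) || true
    # The call must fail and the error must name the invalid range.
    [[ $out == *"Invalid cntlid range"* ]] || exit 1

The nvmf_delete_target case has the same shape; only the method name and the expected "The specified target doesn't exist, cannot delete it." message differ.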
00:12:55.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.719 --rc genhtml_branch_coverage=1 00:12:55.719 --rc genhtml_function_coverage=1 00:12:55.719 --rc genhtml_legend=1 00:12:55.719 --rc geninfo_all_blocks=1 00:12:55.719 --rc geninfo_unexecuted_blocks=1 00:12:55.719 00:12:55.719 ' 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.719 --rc genhtml_branch_coverage=1 00:12:55.719 --rc genhtml_function_coverage=1 00:12:55.719 --rc genhtml_legend=1 00:12:55.719 --rc geninfo_all_blocks=1 00:12:55.719 --rc geninfo_unexecuted_blocks=1 00:12:55.719 00:12:55.719 ' 00:12:55.719 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.719 --rc genhtml_branch_coverage=1 00:12:55.719 --rc genhtml_function_coverage=1 00:12:55.720 --rc genhtml_legend=1 00:12:55.720 --rc geninfo_all_blocks=1 00:12:55.720 --rc geninfo_unexecuted_blocks=1 00:12:55.720 00:12:55.720 ' 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.720 --rc genhtml_branch_coverage=1 00:12:55.720 --rc genhtml_function_coverage=1 00:12:55.720 --rc genhtml_legend=1 00:12:55.720 --rc geninfo_all_blocks=1 00:12:55.720 --rc geninfo_unexecuted_blocks=1 00:12:55.720 00:12:55.720 ' 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:55.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:55.720 07:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:00.992 07:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:00.992 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:00.992 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:00.992 Found net devices under 0000:86:00.0: cvl_0_0 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:00.992 Found net devices under 0000:86:00.1: cvl_0_1 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:00.992 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:00.993 07:55:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:00.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:13:00.993 00:13:00.993 --- 10.0.0.2 ping statistics --- 00:13:00.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.993 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:13:00.993 00:13:00.993 --- 10.0.0.1 ping statistics --- 00:13:00.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.993 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:00.993 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2394520 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2394520 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2394520 ']' 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:01.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.253 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.253 [2024-11-27 07:55:55.168203] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:13:01.253 [2024-11-27 07:55:55.168254] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.253 [2024-11-27 07:55:55.236500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:01.253 [2024-11-27 07:55:55.279376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.253 [2024-11-27 07:55:55.279413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.253 [2024-11-27 07:55:55.279421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.253 [2024-11-27 07:55:55.279428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.253 [2024-11-27 07:55:55.279433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.253 [2024-11-27 07:55:55.280780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.253 [2024-11-27 07:55:55.280866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.253 [2024-11-27 07:55:55.280867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.512 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.512 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:01.512 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:01.512 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:01.512 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.512 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.512 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.512 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.512 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.512 [2024-11-27 07:55:55.418463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.512 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
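Stripped of the xtrace noise, the target bring-up that connect_stress.sh drives at this point is a short sequence of RPCs against the nvmf_tgt started above (rpc_cmd in the trace is the harness's wrapper for the same RPC interface). A minimal sketch using scripts/rpc.py directly; the final add_ns call is an assumption added here to make the null bdev reachable and is not visible in this excerpt:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    # Assumption, not shown in this excerpt: expose NULL1 as a namespace of cnode1.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The stress client launched just below (connect_stress -t 10 against trtype:tcp traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1) then runs for ten seconds while the harness keeps issuing RPCs to the same target in parallel.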
00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.513 [2024-11-27 07:55:55.438697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.513 NULL1 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2394548 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:01.513 07:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.513 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.772 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.772 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:01.772 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.772 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.772 07:55:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.339 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.339 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:02.339 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.339 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.339 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.597 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.597 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:02.597 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.597 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.597 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.856 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.856 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:02.856 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.856 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.856 07:55:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.114 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.114 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:03.114 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.114 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.114 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.373 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.373 07:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:03.373 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.373 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.373 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.992 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.992 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:03.992 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.992 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.992 07:55:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.321 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.321 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:04.321 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.321 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.321 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.603 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.603 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:04.603 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.603 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.603 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.861 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.862 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:04.862 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.862 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.862 07:55:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.120 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.120 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:05.120 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.120 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.120 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.378 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.378 07:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:05.378 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.378 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.378 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.944 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.944 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:05.944 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.944 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.944 07:55:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.202 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.202 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:06.202 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.202 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.202 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.460 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.460 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:06.460 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.460 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.460 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.718 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.718 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:06.718 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.718 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.718 07:56:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.977 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.977 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:06.977 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.977 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.977 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.542 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.542 07:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:07.542 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.542 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.542 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.801 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.801 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:07.801 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.801 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.801 07:56:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.059 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.059 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:08.059 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.059 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.059 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.318 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.318 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:08.318 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.318 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.318 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.576 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.576 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:08.576 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.576 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.576 07:56:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.142 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.143 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:09.143 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.143 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.143 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.400 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.400 07:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:09.400 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.400 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.400 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.659 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.659 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:09.659 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.659 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.659 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.917 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.917 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:09.917 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.917 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.917 07:56:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.484 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.484 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:10.484 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.484 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.484 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.741 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.741 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:10.741 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.741 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.742 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.000 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.000 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:11.000 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.000 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.000 07:56:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.258 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.258 07:56:05 
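The block of near-identical trace lines above is the watchdog loop in target/connect_stress.sh: while the background stress client (PID 2394548 in this run) is still alive, the script keeps issuing an RPC against the running target so that a hung or crashed target is noticed immediately. A minimal sketch of that pattern follows; $stress_pid and the bare rpc_cmd call are placeholders for what the harness wires up elsewhere, not the verbatim script:

    # Sketch of the health-check loop seen in the trace above (not the verbatim script).
    # Assumes $stress_pid holds the PID of the background stress client and rpc_cmd is
    # the harness wrapper around scripts/rpc.py, with its input prepared by the test.
    while kill -0 "$stress_pid"; do   # stress client still running? prints "No such process" once it exits
        rpc_cmd >/dev/null            # poke the target so a dead target fails the loop body
    done
    wait "$stress_pid" || true        # reap the client after it finishes

Once kill -0 reports "No such process" the loop falls through, which is exactly the transition visible a few lines further down before the rpc.txt cleanup.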
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:11.258 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.258 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.258 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.515 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:11.515 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.516 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2394548 00:13:11.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2394548) - No such process 00:13:11.516 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2394548 00:13:11.516 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:11.516 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:11.516 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:11.516 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.516 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:11.516 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.516 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.774 rmmod nvme_tcp 00:13:11.774 rmmod nvme_fabrics 00:13:11.774 rmmod nvme_keyring 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2394520 ']' 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2394520 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2394520 ']' 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2394520 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2394520 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2394520' 00:13:11.774 killing process with pid 2394520 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2394520 00:13:11.774 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2394520 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.032 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.033 07:56:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.933 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.933 00:13:13.933 real 0m18.591s 00:13:13.933 user 0m39.423s 00:13:13.933 sys 0m8.117s 00:13:13.933 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.933 07:56:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.933 ************************************ 00:13:13.933 END TEST nvmf_connect_stress 00:13:13.933 ************************************ 00:13:13.933 07:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:13.933 07:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:13.933 07:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.933 07:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.193 ************************************ 00:13:14.193 START TEST nvmf_fused_ordering 00:13:14.193 ************************************ 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:14.193 * Looking for test storage... 
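Every test in this log is bracketed by the same START TEST / END TEST banners and a time summary (the real/user/sys figures above), which come from the run_test wrapper in autotest_common.sh rather than from the individual scripts; the '[' 3 -le 1 ']' check in the trace is its argument-count guard. A simplified sketch of what that wrapper does, with the xtrace and timing bookkeeping of the real helper omitted:

    # Simplified run_test sketch; the real helper also manages xtrace state and timing records.
    run_test() {
        [ $# -le 1 ] && return 1        # needs a test name plus at least one command word
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                       # e.g. fused_ordering.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

So the run_test nvmf_fused_ordering invocation above is just a banner-wrapped, timed run of fused_ordering.sh, whose own setup trace follows.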
00:13:14.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:14.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.193 --rc genhtml_branch_coverage=1 00:13:14.193 --rc genhtml_function_coverage=1 00:13:14.193 --rc genhtml_legend=1 00:13:14.193 --rc geninfo_all_blocks=1 00:13:14.193 --rc geninfo_unexecuted_blocks=1 00:13:14.193 00:13:14.193 ' 00:13:14.193 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:14.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.193 --rc genhtml_branch_coverage=1 00:13:14.194 --rc genhtml_function_coverage=1 00:13:14.194 --rc genhtml_legend=1 00:13:14.194 --rc geninfo_all_blocks=1 00:13:14.194 --rc geninfo_unexecuted_blocks=1 00:13:14.194 00:13:14.194 ' 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:14.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.194 --rc genhtml_branch_coverage=1 00:13:14.194 --rc genhtml_function_coverage=1 00:13:14.194 --rc genhtml_legend=1 00:13:14.194 --rc geninfo_all_blocks=1 00:13:14.194 --rc geninfo_unexecuted_blocks=1 00:13:14.194 00:13:14.194 ' 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:14.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.194 --rc genhtml_branch_coverage=1 00:13:14.194 --rc genhtml_function_coverage=1 00:13:14.194 --rc genhtml_legend=1 00:13:14.194 --rc geninfo_all_blocks=1 00:13:14.194 --rc geninfo_unexecuted_blocks=1 00:13:14.194 00:13:14.194 ' 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:14.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.194 07:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:20.758 07:56:13 
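The "[: : integer expression expected" message above appears to be benign in this run: line 33 of nvmf/common.sh compares a configuration variable numerically, the variable expands to an empty string here, so the test builtin complains, the check is treated as false, and the script continues at line 37. A tiny illustration of that effect and of the usual ${VAR:-0} guard (illustration only, with a made-up variable name, not the actual common.sh code):

    # MAYBE_FLAG is a hypothetical stand-in for an unset configuration variable.
    unset MAYBE_FLAG
    [ "$MAYBE_FLAG" -eq 1 ] 2>/dev/null || echo "empty value: comparison errors out and is treated as false"
    [ "${MAYBE_FLAG:-0}" -eq 1 ]        || echo "defaulted to 0: comparison is well-formed (and false here)"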
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:20.758 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:20.759 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:20.759 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:20.759 Found net devices under 0000:86:00.0: cvl_0_0 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:20.759 Found net devices under 0000:86:00.1: cvl_0_1 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:20.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:13:20.759 00:13:20.759 --- 10.0.0.2 ping statistics --- 00:13:20.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.759 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:13:20.759 00:13:20.759 --- 10.0.0.1 ping statistics --- 00:13:20.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.759 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2399766 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2399766 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2399766 ']' 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:20.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.759 07:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.759 [2024-11-27 07:56:14.022118] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:13:20.759 [2024-11-27 07:56:14.022165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.759 [2024-11-27 07:56:14.089866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.759 [2024-11-27 07:56:14.132852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.759 [2024-11-27 07:56:14.132887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.760 [2024-11-27 07:56:14.132895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.760 [2024-11-27 07:56:14.132902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.760 [2024-11-27 07:56:14.132907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.760 [2024-11-27 07:56:14.133453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.760 [2024-11-27 07:56:14.266089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.760 [2024-11-27 07:56:14.282271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.760 NULL1 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.760 07:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:20.760 [2024-11-27 07:56:14.335121] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
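Before the fused_ordering client is started, the harness configures the target entirely through RPCs, all visible in the trace above: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a 1000 MB null bdev (NULL1, 512-byte blocks) attached as a namespace. Outside the harness the same target state could be reproduced with scripts/rpc.py directly; the sketch below simply replays the traced arguments (rpc_cmd is the harness wrapper around that script):

    # Sketch: the traced rpc_cmd calls expressed as direct scripts/rpc.py invocations,
    # run from the spdk checkout against the default /var/tmp/spdk.sock RPC socket.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512      # name, size in MB, block size
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering app then attaches to that subsystem over TCP (the "Attached to nqn.2016-06.io.spdk:cnode1 ... size: 1GB" line below), and each fused_ordering(N) line that follows marks one iteration of its fused-command submission loop.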
00:13:20.760 [2024-11-27 07:56:14.335152] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399943 ] 00:13:20.760 Attached to nqn.2016-06.io.spdk:cnode1 00:13:20.760 Namespace ID: 1 size: 1GB 00:13:20.760 fused_ordering(0) 00:13:20.760 fused_ordering(1) 00:13:20.760 fused_ordering(2) 00:13:20.760 fused_ordering(3) 00:13:20.760 fused_ordering(4) 00:13:20.760 fused_ordering(5) 00:13:20.760 fused_ordering(6) 00:13:20.760 fused_ordering(7) 00:13:20.760 fused_ordering(8) 00:13:20.760 fused_ordering(9) 00:13:20.760 fused_ordering(10) 00:13:20.760 fused_ordering(11) 00:13:20.760 fused_ordering(12) 00:13:20.760 fused_ordering(13) 00:13:20.760 fused_ordering(14) 00:13:20.760 fused_ordering(15) 00:13:20.760 fused_ordering(16) 00:13:20.760 fused_ordering(17) 00:13:20.760 fused_ordering(18) 00:13:20.760 fused_ordering(19) 00:13:20.760 fused_ordering(20) 00:13:20.760 fused_ordering(21) 00:13:20.760 fused_ordering(22) 00:13:20.760 fused_ordering(23) 00:13:20.760 fused_ordering(24) 00:13:20.760 fused_ordering(25) 00:13:20.760 fused_ordering(26) 00:13:20.760 fused_ordering(27) 00:13:20.760 fused_ordering(28) 00:13:20.760 fused_ordering(29) 00:13:20.760 fused_ordering(30) 00:13:20.760 fused_ordering(31) 00:13:20.760 fused_ordering(32) 00:13:20.760 fused_ordering(33) 00:13:20.760 fused_ordering(34) 00:13:20.760 fused_ordering(35) 00:13:20.760 fused_ordering(36) 00:13:20.760 fused_ordering(37) 00:13:20.760 fused_ordering(38) 00:13:20.760 fused_ordering(39) 00:13:20.760 fused_ordering(40) 00:13:20.760 fused_ordering(41) 00:13:20.760 fused_ordering(42) 00:13:20.760 fused_ordering(43) 00:13:20.760 fused_ordering(44) 00:13:20.760 fused_ordering(45) 00:13:20.760 fused_ordering(46) 00:13:20.760 fused_ordering(47) 00:13:20.760 fused_ordering(48) 00:13:20.760 fused_ordering(49) 00:13:20.760 fused_ordering(50) 00:13:20.760 fused_ordering(51) 00:13:20.760 fused_ordering(52) 00:13:20.760 fused_ordering(53) 00:13:20.760 fused_ordering(54) 00:13:20.760 fused_ordering(55) 00:13:20.760 fused_ordering(56) 00:13:20.760 fused_ordering(57) 00:13:20.760 fused_ordering(58) 00:13:20.760 fused_ordering(59) 00:13:20.760 fused_ordering(60) 00:13:20.760 fused_ordering(61) 00:13:20.760 fused_ordering(62) 00:13:20.760 fused_ordering(63) 00:13:20.760 fused_ordering(64) 00:13:20.760 fused_ordering(65) 00:13:20.760 fused_ordering(66) 00:13:20.760 fused_ordering(67) 00:13:20.760 fused_ordering(68) 00:13:20.760 fused_ordering(69) 00:13:20.760 fused_ordering(70) 00:13:20.760 fused_ordering(71) 00:13:20.760 fused_ordering(72) 00:13:20.760 fused_ordering(73) 00:13:20.760 fused_ordering(74) 00:13:20.760 fused_ordering(75) 00:13:20.760 fused_ordering(76) 00:13:20.760 fused_ordering(77) 00:13:20.760 fused_ordering(78) 00:13:20.760 fused_ordering(79) 00:13:20.760 fused_ordering(80) 00:13:20.760 fused_ordering(81) 00:13:20.760 fused_ordering(82) 00:13:20.760 fused_ordering(83) 00:13:20.760 fused_ordering(84) 00:13:20.760 fused_ordering(85) 00:13:20.760 fused_ordering(86) 00:13:20.760 fused_ordering(87) 00:13:20.760 fused_ordering(88) 00:13:20.760 fused_ordering(89) 00:13:20.760 fused_ordering(90) 00:13:20.760 fused_ordering(91) 00:13:20.760 fused_ordering(92) 00:13:20.760 fused_ordering(93) 00:13:20.760 fused_ordering(94) 00:13:20.760 fused_ordering(95) 00:13:20.760 fused_ordering(96) 00:13:20.760 fused_ordering(97) 00:13:20.760 fused_ordering(98) 
00:13:20.760 fused_ordering(99) ... 00:13:22.108 fused_ordering(958) (per-request fused_ordering acknowledgements for requests 99 through 958, logged in unbroken ascending order between 00:13:20.760 and 00:13:22.108, condensed here; the sequence continues below through request 1023)
00:13:22.108 fused_ordering(959) 00:13:22.108 fused_ordering(960) 00:13:22.108 fused_ordering(961) 00:13:22.108 fused_ordering(962) 00:13:22.108 fused_ordering(963) 00:13:22.108 fused_ordering(964) 00:13:22.108 fused_ordering(965) 00:13:22.108 fused_ordering(966) 00:13:22.108 fused_ordering(967) 00:13:22.108 fused_ordering(968) 00:13:22.108 fused_ordering(969) 00:13:22.108 fused_ordering(970) 00:13:22.108 fused_ordering(971) 00:13:22.108 fused_ordering(972) 00:13:22.108 fused_ordering(973) 00:13:22.108 fused_ordering(974) 00:13:22.108 fused_ordering(975) 00:13:22.108 fused_ordering(976) 00:13:22.108 fused_ordering(977) 00:13:22.108 fused_ordering(978) 00:13:22.108 fused_ordering(979) 00:13:22.108 fused_ordering(980) 00:13:22.108 fused_ordering(981) 00:13:22.108 fused_ordering(982) 00:13:22.108 fused_ordering(983) 00:13:22.108 fused_ordering(984) 00:13:22.108 fused_ordering(985) 00:13:22.108 fused_ordering(986) 00:13:22.108 fused_ordering(987) 00:13:22.108 fused_ordering(988) 00:13:22.108 fused_ordering(989) 00:13:22.108 fused_ordering(990) 00:13:22.108 fused_ordering(991) 00:13:22.108 fused_ordering(992) 00:13:22.108 fused_ordering(993) 00:13:22.108 fused_ordering(994) 00:13:22.108 fused_ordering(995) 00:13:22.108 fused_ordering(996) 00:13:22.108 fused_ordering(997) 00:13:22.108 fused_ordering(998) 00:13:22.108 fused_ordering(999) 00:13:22.108 fused_ordering(1000) 00:13:22.108 fused_ordering(1001) 00:13:22.108 fused_ordering(1002) 00:13:22.108 fused_ordering(1003) 00:13:22.108 fused_ordering(1004) 00:13:22.108 fused_ordering(1005) 00:13:22.108 fused_ordering(1006) 00:13:22.108 fused_ordering(1007) 00:13:22.108 fused_ordering(1008) 00:13:22.108 fused_ordering(1009) 00:13:22.108 fused_ordering(1010) 00:13:22.108 fused_ordering(1011) 00:13:22.108 fused_ordering(1012) 00:13:22.108 fused_ordering(1013) 00:13:22.108 fused_ordering(1014) 00:13:22.108 fused_ordering(1015) 00:13:22.108 fused_ordering(1016) 00:13:22.108 fused_ordering(1017) 00:13:22.108 fused_ordering(1018) 00:13:22.108 fused_ordering(1019) 00:13:22.108 fused_ordering(1020) 00:13:22.108 fused_ordering(1021) 00:13:22.108 fused_ordering(1022) 00:13:22.108 fused_ordering(1023) 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:22.108 rmmod nvme_tcp 00:13:22.108 rmmod nvme_fabrics 00:13:22.108 rmmod nvme_keyring 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:22.108 07:56:16 
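The nvmfcleanup trace above retries the NVMe/TCP module unload before the harness moves on. A rough bash sketch of that sequence, with the retry bound taken from the "for i in {1..20}" line; the break/sleep handling is an assumption, not the harness source:

  set +e                                  # a busy module must not abort the cleanup
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break    # -v prints the rmmod calls seen above (nvme_tcp, nvme_fabrics, nvme_keyring)
      sleep 1                             # assumed back-off between attempts
  done
  modprobe -v -r nvme-fabrics
  set -e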
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2399766 ']' 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2399766 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2399766 ']' 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2399766 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.108 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2399766 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2399766' 00:13:22.367 killing process with pid 2399766 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2399766 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2399766 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.367 07:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.464 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:24.464 00:13:24.464 real 0m10.410s 00:13:24.464 user 0m4.916s 00:13:24.464 sys 0m5.621s 00:13:24.464 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.464 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.464 ************************************ 00:13:24.464 END TEST nvmf_fused_ordering 00:13:24.464 
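Two reusable teardown patterns appear in the trace above: the killprocess helper checks that the target pid is still alive before killing and reaping it, and the iptr helper strips only the firewall rules the tests tagged. A simplified, hedged stand-in for both (the pid value is the one reported above; everything else is illustrative, not the harness source):

  pid=2399766                              # nvmf_tgt pid reported by the harness above
  if kill -0 "$pid" 2>/dev/null; then      # only act if the process still exists
      kill "$pid"
      wait "$pid" 2>/dev/null              # reap it; valid here because the harness started it in this shell
  fi
  # Every rule the tests install carries an SPDK_NVMF comment, so filtering a saved
  # ruleset and restoring it removes exactly those rules and nothing else.
  iptables-save | grep -v SPDK_NVMF | iptables-restore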
************************************ 00:13:24.464 07:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:24.464 07:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.464 07:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.464 07:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.464 ************************************ 00:13:24.464 START TEST nvmf_ns_masking 00:13:24.464 ************************************ 00:13:24.464 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:24.724 * Looking for test storage... 00:13:24.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.724 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.725 --rc genhtml_branch_coverage=1 00:13:24.725 --rc genhtml_function_coverage=1 00:13:24.725 --rc genhtml_legend=1 00:13:24.725 --rc geninfo_all_blocks=1 00:13:24.725 --rc geninfo_unexecuted_blocks=1 00:13:24.725 00:13:24.725 ' 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.725 --rc genhtml_branch_coverage=1 00:13:24.725 --rc genhtml_function_coverage=1 00:13:24.725 --rc genhtml_legend=1 00:13:24.725 --rc geninfo_all_blocks=1 00:13:24.725 --rc geninfo_unexecuted_blocks=1 00:13:24.725 00:13:24.725 ' 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:24.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.725 --rc genhtml_branch_coverage=1 00:13:24.725 --rc genhtml_function_coverage=1 00:13:24.725 --rc genhtml_legend=1 00:13:24.725 --rc geninfo_all_blocks=1 00:13:24.725 --rc geninfo_unexecuted_blocks=1 00:13:24.725 00:13:24.725 ' 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.725 --rc genhtml_branch_coverage=1 00:13:24.725 --rc genhtml_function_coverage=1 00:13:24.725 --rc genhtml_legend=1 00:13:24.725 --rc geninfo_all_blocks=1 00:13:24.725 --rc geninfo_unexecuted_blocks=1 00:13:24.725 00:13:24.725 ' 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.725 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7609c160-747f-4c33-82b3-47c9fbe7b696 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=78a43829-b8c0-4473-bae8-019e02dee19b 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=db18469b-e334-4b96-ac86-caf718f0b39e 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.726 07:56:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.007 07:56:23 
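The identifiers the masking test will reuse are all derived up front in the trace above. A short sketch of the same derivation (variable names follow the trace; the uuidgen output naturally differs on every run):

  ns1uuid=$(uuidgen)                        # UUIDs assigned to the two test namespaces
  ns2uuid=$(uuidgen)
  SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN1=nqn.2016-06.io.spdk:host1        # two host NQNs so masking can expose a namespace to one host and hide it from the other
  HOSTNQN2=nqn.2016-06.io.spdk:host2
  HOSTID=$(uuidgen)                         # passed to nvme connect as -I later in this test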
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:30.007 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:30.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:30.008 07:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:30.008 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:30.008 Found net devices under 0000:86:00.0: cvl_0_0 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
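The device discovery above maps each e810 PCI function to its kernel net device by globbing sysfs. A minimal stand-alone equivalent of that lookup (the PCI address is the first port found above; the snippet is illustrative, not harness code):

  pci=0000:86:00.0
  ls "/sys/bus/pci/devices/$pci/net/"       # prints the backing netdev, e.g. cvl_0_0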
00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:30.008 Found net devices under 0000:86:00.1: cvl_0_1 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.008 07:56:23 
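The nvmf_tcp_init trace above splits the two cabled e810 ports between the root namespace and a fresh network namespace so target and initiator traffic crosses a real link. Condensed into plain commands (interface names and addresses exactly as in the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up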
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:13:30.008 00:13:30.008 --- 10.0.0.2 ping statistics --- 00:13:30.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.008 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:13:30.008 00:13:30.008 --- 10.0.0.1 ping statistics --- 00:13:30.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.008 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.008 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.009 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:30.009 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2403656 00:13:30.009 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2403656 00:13:30.009 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2403656 ']' 00:13:30.009 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.009 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.009 07:56:23 
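Before starting the target, the harness opens the NVMe/TCP port on the initiator-side interface and verifies reachability in both directions. A sketch of that check using the same interface names and addresses; the comment tag is what the later iptables-restore cleanup filters on:

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace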
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.009 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.009 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:30.009 07:56:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:30.009 [2024-11-27 07:56:23.985573] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:13:30.009 [2024-11-27 07:56:23.985619] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.009 [2024-11-27 07:56:24.051253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.009 [2024-11-27 07:56:24.092291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.009 [2024-11-27 07:56:24.092331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.009 [2024-11-27 07:56:24.092339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.009 [2024-11-27 07:56:24.092345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.009 [2024-11-27 07:56:24.092350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
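The trace above is nvmf_tcp_init carving the two cvl_* net devices into a target/initiator pair: the target-side port is moved into its own network namespace, 10.0.0.1 and 10.0.0.2 are assigned, an iptables rule opens TCP 4420, and nvmf_tgt is started inside the namespace. A condensed sketch of that sequence follows; the interface names, addresses and the nvmf_tgt path are the ones from this run, while the layout (no retries, no error handling, no SPDK_NVMF comment tag on the iptables rule) is a simplification rather than the harness's exact code.

  ip netns add cvl_0_0_ns_spdk                      # namespace that owns the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target-side port moves in
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The harness also tags this rule with an SPDK_NVMF comment so cleanup can find it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

With the target up, ns_masking.sh waits for it to listen on /var/tmp/spdk.sock and drives everything else through rpc.py, as the rest of the trace shows.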
00:13:30.009 [2024-11-27 07:56:24.092909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.268 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.268 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:30.268 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:30.268 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:30.268 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:30.268 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.268 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:30.528 [2024-11-27 07:56:24.390779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.528 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:30.528 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:30.528 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:30.528 Malloc1 00:13:30.528 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:30.787 Malloc2 00:13:30.787 07:56:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:31.046 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:31.305 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.305 [2024-11-27 07:56:25.387567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.305 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:31.305 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I db18469b-e334-4b96-ac86-caf718f0b39e -a 10.0.0.2 -s 4420 -i 4 00:13:31.564 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.564 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:31.564 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.564 07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:31.564 
07:56:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:34.101 [ 0]:0x1 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c028e2403cc493b9e16b3a97b29cb9a 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c028e2403cc493b9e16b3a97b29cb9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.101 [ 0]:0x1 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c028e2403cc493b9e16b3a97b29cb9a 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c028e2403cc493b9e16b3a97b29cb9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.101 07:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.101 07:56:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:34.101 [ 1]:0x2 00:13:34.101 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:34.101 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.101 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca5201f9ea5e4d0ab3a79488d47ecce5 00:13:34.101 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca5201f9ea5e4d0ab3a79488d47ecce5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.101 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:34.101 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.101 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.360 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:34.620 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:34.620 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I db18469b-e334-4b96-ac86-caf718f0b39e -a 10.0.0.2 -s 4420 -i 4 00:13:34.620 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:34.620 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:34.620 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.620 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:34.620 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:34.620 07:56:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.155 [ 0]:0x2 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=ca5201f9ea5e4d0ab3a79488d47ecce5 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca5201f9ea5e4d0ab3a79488d47ecce5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.155 07:56:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.155 [ 0]:0x1 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c028e2403cc493b9e16b3a97b29cb9a 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c028e2403cc493b9e16b3a97b29cb9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.155 [ 1]:0x2 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca5201f9ea5e4d0ab3a79488d47ecce5 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca5201f9ea5e4d0ab3a79488d47ecce5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.155 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.415 07:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.415 [ 0]:0x2 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca5201f9ea5e4d0ab3a79488d47ecce5 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca5201f9ea5e4d0ab3a79488d47ecce5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.415 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:37.675 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:37.675 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I db18469b-e334-4b96-ac86-caf718f0b39e -a 10.0.0.2 -s 4420 -i 4 00:13:37.934 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:37.934 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:37.934 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.934 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:37.934 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:37.934 07:56:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:39.841 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:39.841 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:39.841 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.841 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:39.841 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.841 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:39.841 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:39.841 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:40.100 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:40.100 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:40.100 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:40.100 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.101 07:56:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.101 [ 0]:0x1 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c028e2403cc493b9e16b3a97b29cb9a 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c028e2403cc493b9e16b3a97b29cb9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:40.101 [ 1]:0x2 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca5201f9ea5e4d0ab3a79488d47ecce5 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca5201f9ea5e4d0ab3a79488d47ecce5 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.101 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:40.360 [ 0]:0x2 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca5201f9ea5e4d0ab3a79488d47ecce5 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca5201f9ea5e4d0ab3a79488d47ecce5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.360 07:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:40.360 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:40.620 [2024-11-27 07:56:34.633528] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:40.620 request: 00:13:40.620 { 00:13:40.620 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:40.620 "nsid": 2, 00:13:40.620 "host": "nqn.2016-06.io.spdk:host1", 00:13:40.620 "method": "nvmf_ns_remove_host", 00:13:40.620 "req_id": 1 00:13:40.620 } 00:13:40.620 Got JSON-RPC error response 00:13:40.620 response: 00:13:40.620 { 00:13:40.620 "code": -32602, 00:13:40.620 "message": "Invalid parameters" 00:13:40.620 } 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:40.620 07:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.620 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.621 [ 0]:0x2 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:40.621 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca5201f9ea5e4d0ab3a79488d47ecce5 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca5201f9ea5e4d0ab3a79488d47ecce5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2405557 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2405557 /var/tmp/host.sock 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2405557 ']' 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:40.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.881 07:56:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:40.881 [2024-11-27 07:56:34.943831] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:13:40.881 [2024-11-27 07:56:34.943877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405557 ] 00:13:41.141 [2024-11-27 07:56:35.007957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.141 [2024-11-27 07:56:35.048916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.400 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.400 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:41.400 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.400 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.659 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7609c160-747f-4c33-82b3-47c9fbe7b696 00:13:41.659 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:41.659 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7609C160747F4C3382B347C9FBE7B696 -i 00:13:41.919 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 78a43829-b8c0-4473-bae8-019e02dee19b 00:13:41.919 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:41.919 07:56:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 78A43829B8C04473BAE8019E02DEE19B -i 00:13:41.919 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:42.178 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:42.437 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:42.437 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:42.696 nvme0n1 00:13:42.696 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:42.696 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:42.955 nvme1n2 00:13:42.955 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:42.955 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:42.955 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:42.955 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:42.955 07:56:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:43.214 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:43.214 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:43.214 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:43.214 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:43.473 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7609c160-747f-4c33-82b3-47c9fbe7b696 == \7\6\0\9\c\1\6\0\-\7\4\7\f\-\4\c\3\3\-\8\2\b\3\-\4\7\c\9\f\b\e\7\b\6\9\6 ]] 00:13:43.473 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:43.473 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:43.473 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:43.473 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
78a43829-b8c0-4473-bae8-019e02dee19b == \7\8\a\4\3\8\2\9\-\b\8\c\0\-\4\4\7\3\-\b\a\e\8\-\0\1\9\e\0\2\d\e\e\1\9\b ]] 00:13:43.473 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.732 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7609c160-747f-4c33-82b3-47c9fbe7b696 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7609C160747F4C3382B347C9FBE7B696 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7609C160747F4C3382B347C9FBE7B696 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:43.992 07:56:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7609C160747F4C3382B347C9FBE7B696 00:13:43.992 [2024-11-27 07:56:38.083176] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:43.992 [2024-11-27 07:56:38.083209] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:43.992 [2024-11-27 07:56:38.083218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.992 request: 00:13:43.992 { 00:13:43.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:43.992 "namespace": { 00:13:43.992 "bdev_name": 
"invalid", 00:13:43.992 "nsid": 1, 00:13:43.992 "nguid": "7609C160747F4C3382B347C9FBE7B696", 00:13:43.992 "no_auto_visible": false, 00:13:43.992 "hide_metadata": false 00:13:43.992 }, 00:13:43.992 "method": "nvmf_subsystem_add_ns", 00:13:43.992 "req_id": 1 00:13:43.992 } 00:13:43.992 Got JSON-RPC error response 00:13:43.992 response: 00:13:43.992 { 00:13:43.992 "code": -32602, 00:13:43.992 "message": "Invalid parameters" 00:13:43.992 } 00:13:43.992 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:43.992 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.992 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:44.251 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:44.251 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7609c160-747f-4c33-82b3-47c9fbe7b696 00:13:44.251 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:44.251 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7609C160747F4C3382B347C9FBE7B696 -i 00:13:44.251 07:56:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:46.799 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2405557 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2405557 ']' 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2405557 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2405557 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2405557' 00:13:46.800 killing process with pid 2405557 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2405557 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2405557 00:13:46.800 07:56:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:47.059 rmmod nvme_tcp 00:13:47.059 rmmod nvme_fabrics 00:13:47.059 rmmod nvme_keyring 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2403656 ']' 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2403656 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2403656 ']' 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2403656 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2403656 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.059 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2403656' 00:13:47.059 killing process with pid 2403656 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2403656 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2403656 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
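Everything between the transport creation and the teardown above exercises SPDK's per-host namespace masking: a namespace added with --no-auto-visible stays invisible to a connected host until nvmf_ns_add_host grants that host access, nvmf_ns_remove_host hides it again, and trying to remove a host from an auto-visible namespace is rejected with -32602, exactly as the JSON-RPC error earlier in the trace shows. Stripped of the test plumbing, the flow looks like the sketch below; the rpc.py path, NQNs, serial and listener address are the ones in this run, the /dev/nvme0 name assumes the controller enumerates the same way it did here, and the optional connect flags (-I hostid, -i nr-io-queues) are omitted for brevity.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Target side: one auto-visible and one masked namespace on the same subsystem.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC bdev_malloc_create 64 512 -b Malloc2
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: connect as host1 and check what the controller exposes (the ns_is_visible logic).
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
  nvme list-ns /dev/nvme0                               # only nsid 2 while nsid 1 is masked
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeroes for the masked namespace

  # Flip visibility per host; the connected host sees nsid 1 appear and disappear.
  $RPC nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # Auto-visible namespaces carry no host list, so this is the call the target rejects:
  $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 || true

The second half of the run repeats the check from a separate SPDK host process (spdk_tgt -r /var/tmp/host.sock -m 2): the namespaces are re-created with explicit NGUIDs via -g, bdev_nvme_attach_controller connects once as host1 and once as host2, and bdev_get_bdevs confirms that each host only sees the namespace whose host list it is on and that the NGUIDs round-trip as the bdev UUIDs, plus a negative check that adding a namespace backed by a nonexistent bdev fails.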
00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.319 07:56:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:49.856 00:13:49.856 real 0m24.911s 00:13:49.856 user 0m30.004s 00:13:49.856 sys 0m6.474s 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:49.856 ************************************ 00:13:49.856 END TEST nvmf_ns_masking 00:13:49.856 ************************************ 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:49.856 ************************************ 00:13:49.856 START TEST nvmf_nvme_cli 00:13:49.856 ************************************ 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:49.856 * Looking for test storage... 
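The teardown that closes the suite is the mirror image of the prologue: the trap registered at startup runs nvmftestfini, which unloads the NVMe initiator modules, kills the target, strips the SPDK-tagged iptables rule and removes the per-test namespace before the next test (nvmf_nvme_cli) begins. In outline, with the helper calls from the trace expanded into the plain commands they stand for; the iptables pipeline and the netns delete are inferred from the helper names (iptr, _remove_spdk_ns), the rest is taken from the log.

  modprobe -v -r nvme-tcp            # drops nvme_tcp, nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                    # killprocess 2403656: the nvmf_tgt started earlier
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything but the tagged rule
  ip netns delete cvl_0_0_ns_spdk    # what _remove_spdk_ns presumably boils down to
  ip -4 addr flush cvl_0_1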
00:13:49.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:49.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.856 --rc genhtml_branch_coverage=1 00:13:49.856 --rc genhtml_function_coverage=1 00:13:49.856 --rc genhtml_legend=1 00:13:49.856 --rc geninfo_all_blocks=1 00:13:49.856 --rc geninfo_unexecuted_blocks=1 00:13:49.856 00:13:49.856 ' 00:13:49.856 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:49.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.856 --rc genhtml_branch_coverage=1 00:13:49.857 --rc genhtml_function_coverage=1 00:13:49.857 --rc genhtml_legend=1 00:13:49.857 --rc geninfo_all_blocks=1 00:13:49.857 --rc geninfo_unexecuted_blocks=1 00:13:49.857 00:13:49.857 ' 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:49.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.857 --rc genhtml_branch_coverage=1 00:13:49.857 --rc genhtml_function_coverage=1 00:13:49.857 --rc genhtml_legend=1 00:13:49.857 --rc geninfo_all_blocks=1 00:13:49.857 --rc geninfo_unexecuted_blocks=1 00:13:49.857 00:13:49.857 ' 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:49.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.857 --rc genhtml_branch_coverage=1 00:13:49.857 --rc genhtml_function_coverage=1 00:13:49.857 --rc genhtml_legend=1 00:13:49.857 --rc geninfo_all_blocks=1 00:13:49.857 --rc geninfo_unexecuted_blocks=1 00:13:49.857 00:13:49.857 ' 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
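The stretch of trace above is scripts/common.sh deciding whether the installed lcov predates 2.x (the `lt 1.15 2` call): it splits both version strings on dots and dashes and compares them field by field. A compact sketch of that comparison, assuming the same splitting rule; the helper name here is illustrative, not the script's own:

# return 0 if version $1 is strictly older than version $2
version_lt() {
  local -a a b
  IFS=.- read -ra a <<< "$1"
  IFS=.- read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x > y )) && return 1
    (( x < y )) && return 0
  done
  return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov is older than 2.x"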
00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:49.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.857 07:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:49.857 07:56:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:55.134 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:55.134 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.134 
07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:55.134 Found net devices under 0000:86:00.0: cvl_0_0 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:55.134 Found net devices under 0000:86:00.1: cvl_0_1 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.134 07:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.134 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.134 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.134 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:55.134 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.134 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.134 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.134 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:55.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:13:55.135 00:13:55.135 --- 10.0.0.2 ping statistics --- 00:13:55.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.135 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:55.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:13:55.135 00:13:55.135 --- 10.0.0.1 ping statistics --- 00:13:55.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.135 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2410199 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2410199 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2410199 ']' 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.135 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.392 [2024-11-27 07:56:49.254187] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:13:55.392 [2024-11-27 07:56:49.254241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.392 [2024-11-27 07:56:49.320739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.392 [2024-11-27 07:56:49.365997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.392 [2024-11-27 07:56:49.366035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.392 [2024-11-27 07:56:49.366045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.392 [2024-11-27 07:56:49.366051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.392 [2024-11-27 07:56:49.366056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.392 [2024-11-27 07:56:49.367664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.392 [2024-11-27 07:56:49.367687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.392 [2024-11-27 07:56:49.367778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.392 [2024-11-27 07:56:49.367779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.392 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.392 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:55.392 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:55.392 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:55.392 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.392 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.651 [2024-11-27 07:56:49.506592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.651 Malloc0 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
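Between the app start above and the discovery output below, the test assembles its target entirely over JSON-RPC. A condensed sketch of that sequence as plain rpc.py calls, with every flag copied from the trace; the default /var/tmp/spdk.sock socket is assumed here, since rpc_cmd in the suite wraps the same rpc.py script shown earlier in the log:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420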
00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.651 Malloc1 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.651 [2024-11-27 07:56:49.601757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:55.651 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.652 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.652 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.652 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:55.910 00:13:55.910 Discovery Log Number of Records 2, Generation counter 2 00:13:55.910 =====Discovery Log Entry 0====== 00:13:55.910 trtype: tcp 00:13:55.910 adrfam: ipv4 00:13:55.910 subtype: current discovery subsystem 00:13:55.910 treq: not required 00:13:55.910 portid: 0 00:13:55.910 trsvcid: 4420 00:13:55.910 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:55.910 traddr: 10.0.0.2 00:13:55.910 eflags: explicit discovery connections, duplicate discovery information 00:13:55.910 sectype: none 00:13:55.910 =====Discovery Log Entry 1====== 00:13:55.910 trtype: tcp 00:13:55.910 adrfam: ipv4 00:13:55.910 subtype: nvme subsystem 00:13:55.910 treq: not required 00:13:55.910 portid: 0 00:13:55.910 trsvcid: 4420 00:13:55.910 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:55.910 traddr: 10.0.0.2 00:13:55.910 eflags: none 00:13:55.910 sectype: none 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:55.910 07:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.845 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:56.845 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:57.104 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.104 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:57.104 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:57.104 07:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:59.020 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:59.020 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:59.020 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.020 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:59.020 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.020 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:59.020 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:59.020 07:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:59.020 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:59.020 07:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:59.020 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:59.020 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:59.020 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:59.020 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:59.020 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:59.020 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:59.020 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:59.020 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:59.020 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:59.020 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:59.021 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:59.021 /dev/nvme0n2 ]] 00:13:59.021 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:59.021 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:59.021 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:59.021 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:59.021 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:59.278 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:59.278 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:59.279 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:59.279 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:59.279 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:59.279 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:59.279 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:59.279 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:59.279 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:59.279 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:59.279 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:59.279 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.537 07:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.537 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:59.537 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:59.537 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.537 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:59.537 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.537 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:59.537 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:59.537 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.537 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.537 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:59.538 rmmod nvme_tcp 00:13:59.538 rmmod nvme_fabrics 00:13:59.538 rmmod nvme_keyring 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2410199 ']' 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2410199 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2410199 ']' 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2410199 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.538 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2410199 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2410199' 00:13:59.797 killing process with pid 2410199 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2410199 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2410199 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.797 07:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.393 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:02.393 00:14:02.393 real 0m12.465s 00:14:02.393 user 0m19.482s 00:14:02.393 sys 0m4.825s 00:14:02.393 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.393 07:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:02.393 ************************************ 00:14:02.393 END TEST nvmf_nvme_cli 00:14:02.393 ************************************ 00:14:02.393 07:56:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:02.393 07:56:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:02.393 07:56:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:02.393 07:56:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.393 07:56:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:02.393 ************************************ 00:14:02.393 START TEST nvmf_vfio_user 00:14:02.393 ************************************ 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:02.393 * Looking for test storage... 00:14:02.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:02.393 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.394 --rc genhtml_branch_coverage=1 00:14:02.394 --rc genhtml_function_coverage=1 00:14:02.394 --rc genhtml_legend=1 00:14:02.394 --rc geninfo_all_blocks=1 00:14:02.394 --rc geninfo_unexecuted_blocks=1 00:14:02.394 00:14:02.394 ' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.394 --rc genhtml_branch_coverage=1 00:14:02.394 --rc genhtml_function_coverage=1 00:14:02.394 --rc genhtml_legend=1 00:14:02.394 --rc geninfo_all_blocks=1 00:14:02.394 --rc geninfo_unexecuted_blocks=1 00:14:02.394 00:14:02.394 ' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.394 --rc genhtml_branch_coverage=1 00:14:02.394 --rc genhtml_function_coverage=1 00:14:02.394 --rc genhtml_legend=1 00:14:02.394 --rc geninfo_all_blocks=1 00:14:02.394 --rc geninfo_unexecuted_blocks=1 00:14:02.394 00:14:02.394 ' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.394 --rc genhtml_branch_coverage=1 00:14:02.394 --rc genhtml_function_coverage=1 00:14:02.394 --rc genhtml_legend=1 00:14:02.394 --rc geninfo_all_blocks=1 00:14:02.394 --rc geninfo_unexecuted_blocks=1 00:14:02.394 00:14:02.394 ' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:02.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2411495 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2411495' 00:14:02.394 Process pid: 2411495 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2411495 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2411495 ']' 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:02.394 [2024-11-27 07:56:56.285518] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:14:02.394 [2024-11-27 07:56:56.285569] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.394 [2024-11-27 07:56:56.347838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.394 [2024-11-27 07:56:56.387772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.394 [2024-11-27 07:56:56.387818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
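For orientation, the nvmf_tgt process traced above is brought up essentially as sketched below: the target is launched with an explicit shared-memory id, tracepoint mask and core list, and the harness then waits until the RPC UNIX socket at /var/tmp/spdk.sock is answering. The polling loop and the rpc_get_methods call are illustrative assumptions standing in for the harness's waitforlisten helper, not its exact code; the nvmf_tgt flags mirror the invocation logged above.

    # Minimal sketch (assumed helper logic); flags taken from the traced nvmf_tgt invocation.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_py=$SPDK_DIR/scripts/rpc.py

    rm -rf /var/run/vfio-user                                  # drop any stale vfio-user sockets
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!

    # Wait for the target to listen on the default RPC socket before issuing any RPCs.
    until $rpc_py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done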
00:14:02.394 [2024-11-27 07:56:56.387824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.394 [2024-11-27 07:56:56.387831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.394 [2024-11-27 07:56:56.387836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.394 [2024-11-27 07:56:56.389389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.394 [2024-11-27 07:56:56.389484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.394 [2024-11-27 07:56:56.389595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.394 [2024-11-27 07:56:56.389596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:02.394 07:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:03.766 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:03.766 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:03.766 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:03.766 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:03.766 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:03.766 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:04.024 Malloc1 00:14:04.024 07:56:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:04.283 07:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:04.283 07:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:04.541 07:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:04.541 07:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:04.541 07:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:04.799 Malloc2 00:14:04.799 07:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
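The setup traced here reduces to one VFIOUSER transport plus, per device, a malloc bdev, a subsystem, a namespace and a vfio-user listener rooted under /var/run/vfio-user. A condensed sketch of that sequence, using the same RPCs as in the trace (the loop is added only for readability; $rpc_py is the script's rpc.py path):

    # Condensed from the rpc.py calls in the trace above.
    $rpc_py nvmf_create_transport -t VFIOUSER

    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc_py bdev_malloc_create 64 512 -b Malloc$i                       # 64 MiB, 512 B blocks
        $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done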
00:14:05.057 07:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:05.314 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:05.314 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:05.314 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:05.314 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:05.314 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:05.314 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:05.314 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:05.314 [2024-11-27 07:56:59.409289] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:14:05.314 [2024-11-27 07:56:59.409321] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411980 ] 00:14:05.575 [2024-11-27 07:56:59.448898] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:05.575 [2024-11-27 07:56:59.458241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:05.575 [2024-11-27 07:56:59.458262] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f814d974000 00:14:05.575 [2024-11-27 07:56:59.459235] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:05.575 [2024-11-27 07:56:59.460243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:05.575 [2024-11-27 07:56:59.461250] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:05.575 [2024-11-27 07:56:59.462248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:05.575 [2024-11-27 07:56:59.463259] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:05.575 [2024-11-27 07:56:59.464261] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:05.575 [2024-11-27 07:56:59.465266] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:05.575 [2024-11-27 07:56:59.466269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:05.575 [2024-11-27 07:56:59.467275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:05.575 [2024-11-27 07:56:59.467284] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f814d969000 00:14:05.575 [2024-11-27 07:56:59.468228] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:05.575 [2024-11-27 07:56:59.477841] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:05.575 [2024-11-27 07:56:59.477865] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:05.575 [2024-11-27 07:56:59.482373] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:05.575 [2024-11-27 07:56:59.482415] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:05.575 [2024-11-27 07:56:59.482488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:05.575 [2024-11-27 07:56:59.482504] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:05.575 [2024-11-27 07:56:59.482511] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:05.575 [2024-11-27 07:56:59.483371] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:05.575 [2024-11-27 07:56:59.483382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:05.575 [2024-11-27 07:56:59.483389] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:05.575 [2024-11-27 07:56:59.484374] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:05.575 [2024-11-27 07:56:59.484382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:05.575 [2024-11-27 07:56:59.484388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:05.575 [2024-11-27 07:56:59.485381] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:05.575 [2024-11-27 07:56:59.485390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:05.575 [2024-11-27 07:56:59.486389] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:14:05.575 [2024-11-27 07:56:59.486397] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:05.575 [2024-11-27 07:56:59.486402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:05.575 [2024-11-27 07:56:59.486408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:05.575 [2024-11-27 07:56:59.486516] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:05.575 [2024-11-27 07:56:59.486521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:05.575 [2024-11-27 07:56:59.486526] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:05.575 [2024-11-27 07:56:59.487394] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:05.575 [2024-11-27 07:56:59.488395] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:05.575 [2024-11-27 07:56:59.489406] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:05.575 [2024-11-27 07:56:59.490406] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.575 [2024-11-27 07:56:59.490482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:05.575 [2024-11-27 07:56:59.491413] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:05.575 [2024-11-27 07:56:59.491422] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:05.575 [2024-11-27 07:56:59.491426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:05.575 [2024-11-27 07:56:59.491446] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:05.575 [2024-11-27 07:56:59.491453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:05.575 [2024-11-27 07:56:59.491471] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:05.575 [2024-11-27 07:56:59.491476] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:05.576 [2024-11-27 07:56:59.491479] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:05.576 [2024-11-27 07:56:59.491492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:05.576 [2024-11-27 07:56:59.491547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:05.576 [2024-11-27 07:56:59.491556] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:05.576 [2024-11-27 07:56:59.491562] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:05.576 [2024-11-27 07:56:59.491566] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:05.576 [2024-11-27 07:56:59.491571] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:05.576 [2024-11-27 07:56:59.491576] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:05.576 [2024-11-27 07:56:59.491580] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:05.576 [2024-11-27 07:56:59.491584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:05.576 [2024-11-27 07:56:59.491611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:05.576 [2024-11-27 07:56:59.491620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.576 [2024-11-27 07:56:59.491628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.576 [2024-11-27 07:56:59.491635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.576 [2024-11-27 07:56:59.491642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.576 [2024-11-27 07:56:59.491646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:05.576 [2024-11-27 07:56:59.491672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:05.576 [2024-11-27 07:56:59.491679] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:05.576 
[2024-11-27 07:56:59.491684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:05.576 [2024-11-27 07:56:59.491716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:05.576 [2024-11-27 07:56:59.491768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491782] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:05.576 [2024-11-27 07:56:59.491786] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:05.576 [2024-11-27 07:56:59.491789] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:05.576 [2024-11-27 07:56:59.491794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:05.576 [2024-11-27 07:56:59.491806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:05.576 [2024-11-27 07:56:59.491817] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:05.576 [2024-11-27 07:56:59.491828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491841] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:05.576 [2024-11-27 07:56:59.491844] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:05.576 [2024-11-27 07:56:59.491847] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:05.576 [2024-11-27 07:56:59.491853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:05.576 [2024-11-27 07:56:59.491878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:05.576 [2024-11-27 07:56:59.491888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491900] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:05.576 [2024-11-27 07:56:59.491905] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:05.576 [2024-11-27 07:56:59.491908] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:05.576 [2024-11-27 07:56:59.491915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:05.576 [2024-11-27 07:56:59.491928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:05.576 [2024-11-27 07:56:59.491937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491974] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:05.576 [2024-11-27 07:56:59.491978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:05.576 [2024-11-27 07:56:59.491982] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:05.576 [2024-11-27 07:56:59.491999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:05.576 [2024-11-27 07:56:59.492008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:05.576 [2024-11-27 07:56:59.492018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:05.577 [2024-11-27 07:56:59.492027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:05.577 [2024-11-27 07:56:59.492037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:05.577 [2024-11-27 07:56:59.492045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:05.577 [2024-11-27 07:56:59.492055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:05.577 [2024-11-27 07:56:59.492065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:05.577 [2024-11-27 07:56:59.492077] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:05.577 [2024-11-27 07:56:59.492081] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:05.577 [2024-11-27 07:56:59.492084] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:05.577 [2024-11-27 07:56:59.492087] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:05.577 [2024-11-27 07:56:59.492090] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:05.577 [2024-11-27 07:56:59.492096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:05.577 [2024-11-27 07:56:59.492102] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:05.577 [2024-11-27 07:56:59.492107] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:05.577 [2024-11-27 07:56:59.492110] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:05.577 [2024-11-27 07:56:59.492116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:05.577 [2024-11-27 07:56:59.492122] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:05.577 [2024-11-27 07:56:59.492126] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:05.577 [2024-11-27 07:56:59.492129] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:05.577 [2024-11-27 07:56:59.492134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:05.577 [2024-11-27 07:56:59.492141] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:05.577 [2024-11-27 07:56:59.492144] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:05.577 [2024-11-27 07:56:59.492147] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:05.577 [2024-11-27 07:56:59.492153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:05.577 [2024-11-27 07:56:59.492159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:05.577 [2024-11-27 07:56:59.492171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:05.577 [2024-11-27 07:56:59.492180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:05.577 [2024-11-27 07:56:59.492186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:05.577 ===================================================== 00:14:05.577 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:05.577 ===================================================== 00:14:05.577 Controller Capabilities/Features 00:14:05.577 ================================ 00:14:05.577 Vendor ID: 4e58 00:14:05.577 Subsystem Vendor ID: 4e58 00:14:05.577 Serial Number: SPDK1 00:14:05.577 Model Number: SPDK bdev Controller 00:14:05.577 Firmware Version: 25.01 00:14:05.577 Recommended Arb Burst: 6 00:14:05.577 IEEE OUI Identifier: 8d 6b 50 00:14:05.577 Multi-path I/O 00:14:05.577 May have multiple subsystem ports: Yes 00:14:05.577 May have multiple controllers: Yes 00:14:05.577 Associated with SR-IOV VF: No 00:14:05.577 Max Data Transfer Size: 131072 00:14:05.577 Max Number of Namespaces: 32 00:14:05.577 Max Number of I/O Queues: 127 00:14:05.577 NVMe Specification Version (VS): 1.3 00:14:05.577 NVMe Specification Version (Identify): 1.3 00:14:05.577 Maximum Queue Entries: 256 00:14:05.577 Contiguous Queues Required: Yes 00:14:05.577 Arbitration Mechanisms Supported 00:14:05.577 Weighted Round Robin: Not Supported 00:14:05.577 Vendor Specific: Not Supported 00:14:05.577 Reset Timeout: 15000 ms 00:14:05.577 Doorbell Stride: 4 bytes 00:14:05.577 NVM Subsystem Reset: Not Supported 00:14:05.577 Command Sets Supported 00:14:05.577 NVM Command Set: Supported 00:14:05.577 Boot Partition: Not Supported 00:14:05.577 Memory Page Size Minimum: 4096 bytes 00:14:05.577 Memory Page Size Maximum: 4096 bytes 00:14:05.577 Persistent Memory Region: Not Supported 00:14:05.577 Optional Asynchronous Events Supported 00:14:05.577 Namespace Attribute Notices: Supported 00:14:05.577 Firmware Activation Notices: Not Supported 00:14:05.577 ANA Change Notices: Not Supported 00:14:05.577 PLE Aggregate Log Change Notices: Not Supported 00:14:05.577 LBA Status Info Alert Notices: Not Supported 00:14:05.577 EGE Aggregate Log Change Notices: Not Supported 00:14:05.577 Normal NVM Subsystem Shutdown event: Not Supported 00:14:05.577 Zone Descriptor Change Notices: Not Supported 00:14:05.577 Discovery Log Change Notices: Not Supported 00:14:05.577 Controller Attributes 00:14:05.577 128-bit Host Identifier: Supported 00:14:05.577 Non-Operational Permissive Mode: Not Supported 00:14:05.577 NVM Sets: Not Supported 00:14:05.577 Read Recovery Levels: Not Supported 00:14:05.577 Endurance Groups: Not Supported 00:14:05.577 Predictable Latency Mode: Not Supported 00:14:05.577 Traffic Based Keep ALive: Not Supported 00:14:05.577 Namespace Granularity: Not Supported 00:14:05.577 SQ Associations: Not Supported 00:14:05.577 UUID List: Not Supported 00:14:05.577 Multi-Domain Subsystem: Not Supported 00:14:05.577 Fixed Capacity Management: Not Supported 00:14:05.577 Variable Capacity Management: Not Supported 00:14:05.577 Delete Endurance Group: Not Supported 00:14:05.577 Delete NVM Set: Not Supported 00:14:05.577 Extended LBA Formats Supported: Not Supported 00:14:05.577 Flexible Data Placement Supported: Not Supported 00:14:05.577 00:14:05.577 Controller Memory Buffer Support 00:14:05.577 ================================ 00:14:05.577 
Supported: No 00:14:05.577 00:14:05.577 Persistent Memory Region Support 00:14:05.577 ================================ 00:14:05.577 Supported: No 00:14:05.577 00:14:05.577 Admin Command Set Attributes 00:14:05.577 ============================ 00:14:05.577 Security Send/Receive: Not Supported 00:14:05.577 Format NVM: Not Supported 00:14:05.577 Firmware Activate/Download: Not Supported 00:14:05.577 Namespace Management: Not Supported 00:14:05.577 Device Self-Test: Not Supported 00:14:05.577 Directives: Not Supported 00:14:05.577 NVMe-MI: Not Supported 00:14:05.578 Virtualization Management: Not Supported 00:14:05.578 Doorbell Buffer Config: Not Supported 00:14:05.578 Get LBA Status Capability: Not Supported 00:14:05.578 Command & Feature Lockdown Capability: Not Supported 00:14:05.578 Abort Command Limit: 4 00:14:05.578 Async Event Request Limit: 4 00:14:05.578 Number of Firmware Slots: N/A 00:14:05.578 Firmware Slot 1 Read-Only: N/A 00:14:05.578 Firmware Activation Without Reset: N/A 00:14:05.578 Multiple Update Detection Support: N/A 00:14:05.578 Firmware Update Granularity: No Information Provided 00:14:05.578 Per-Namespace SMART Log: No 00:14:05.578 Asymmetric Namespace Access Log Page: Not Supported 00:14:05.578 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:05.578 Command Effects Log Page: Supported 00:14:05.578 Get Log Page Extended Data: Supported 00:14:05.578 Telemetry Log Pages: Not Supported 00:14:05.578 Persistent Event Log Pages: Not Supported 00:14:05.578 Supported Log Pages Log Page: May Support 00:14:05.578 Commands Supported & Effects Log Page: Not Supported 00:14:05.578 Feature Identifiers & Effects Log Page:May Support 00:14:05.578 NVMe-MI Commands & Effects Log Page: May Support 00:14:05.578 Data Area 4 for Telemetry Log: Not Supported 00:14:05.578 Error Log Page Entries Supported: 128 00:14:05.578 Keep Alive: Supported 00:14:05.578 Keep Alive Granularity: 10000 ms 00:14:05.578 00:14:05.578 NVM Command Set Attributes 00:14:05.578 ========================== 00:14:05.578 Submission Queue Entry Size 00:14:05.578 Max: 64 00:14:05.578 Min: 64 00:14:05.578 Completion Queue Entry Size 00:14:05.578 Max: 16 00:14:05.578 Min: 16 00:14:05.578 Number of Namespaces: 32 00:14:05.578 Compare Command: Supported 00:14:05.578 Write Uncorrectable Command: Not Supported 00:14:05.578 Dataset Management Command: Supported 00:14:05.578 Write Zeroes Command: Supported 00:14:05.578 Set Features Save Field: Not Supported 00:14:05.578 Reservations: Not Supported 00:14:05.578 Timestamp: Not Supported 00:14:05.578 Copy: Supported 00:14:05.578 Volatile Write Cache: Present 00:14:05.578 Atomic Write Unit (Normal): 1 00:14:05.578 Atomic Write Unit (PFail): 1 00:14:05.578 Atomic Compare & Write Unit: 1 00:14:05.578 Fused Compare & Write: Supported 00:14:05.578 Scatter-Gather List 00:14:05.578 SGL Command Set: Supported (Dword aligned) 00:14:05.578 SGL Keyed: Not Supported 00:14:05.578 SGL Bit Bucket Descriptor: Not Supported 00:14:05.578 SGL Metadata Pointer: Not Supported 00:14:05.578 Oversized SGL: Not Supported 00:14:05.578 SGL Metadata Address: Not Supported 00:14:05.578 SGL Offset: Not Supported 00:14:05.578 Transport SGL Data Block: Not Supported 00:14:05.578 Replay Protected Memory Block: Not Supported 00:14:05.578 00:14:05.578 Firmware Slot Information 00:14:05.578 ========================= 00:14:05.578 Active slot: 1 00:14:05.578 Slot 1 Firmware Revision: 25.01 00:14:05.578 00:14:05.578 00:14:05.578 Commands Supported and Effects 00:14:05.578 ============================== 00:14:05.578 Admin 
Commands 00:14:05.578 -------------- 00:14:05.578 Get Log Page (02h): Supported 00:14:05.578 Identify (06h): Supported 00:14:05.578 Abort (08h): Supported 00:14:05.578 Set Features (09h): Supported 00:14:05.578 Get Features (0Ah): Supported 00:14:05.578 Asynchronous Event Request (0Ch): Supported 00:14:05.578 Keep Alive (18h): Supported 00:14:05.578 I/O Commands 00:14:05.578 ------------ 00:14:05.578 Flush (00h): Supported LBA-Change 00:14:05.578 Write (01h): Supported LBA-Change 00:14:05.578 Read (02h): Supported 00:14:05.578 Compare (05h): Supported 00:14:05.578 Write Zeroes (08h): Supported LBA-Change 00:14:05.578 Dataset Management (09h): Supported LBA-Change 00:14:05.578 Copy (19h): Supported LBA-Change 00:14:05.578 00:14:05.578 Error Log 00:14:05.578 ========= 00:14:05.578 00:14:05.578 Arbitration 00:14:05.578 =========== 00:14:05.578 Arbitration Burst: 1 00:14:05.578 00:14:05.578 Power Management 00:14:05.578 ================ 00:14:05.578 Number of Power States: 1 00:14:05.578 Current Power State: Power State #0 00:14:05.578 Power State #0: 00:14:05.578 Max Power: 0.00 W 00:14:05.578 Non-Operational State: Operational 00:14:05.578 Entry Latency: Not Reported 00:14:05.578 Exit Latency: Not Reported 00:14:05.578 Relative Read Throughput: 0 00:14:05.578 Relative Read Latency: 0 00:14:05.578 Relative Write Throughput: 0 00:14:05.578 Relative Write Latency: 0 00:14:05.578 Idle Power: Not Reported 00:14:05.578 Active Power: Not Reported 00:14:05.578 Non-Operational Permissive Mode: Not Supported 00:14:05.578 00:14:05.578 Health Information 00:14:05.578 ================== 00:14:05.578 Critical Warnings: 00:14:05.578 Available Spare Space: OK 00:14:05.578 Temperature: OK 00:14:05.578 Device Reliability: OK 00:14:05.578 Read Only: No 00:14:05.578 Volatile Memory Backup: OK 00:14:05.578 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:05.578 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:05.578 Available Spare: 0% 00:14:05.578 Available Sp[2024-11-27 07:56:59.492272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:05.578 [2024-11-27 07:56:59.492281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:05.578 [2024-11-27 07:56:59.492308] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:05.578 [2024-11-27 07:56:59.492317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.578 [2024-11-27 07:56:59.492323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.578 [2024-11-27 07:56:59.492328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.578 [2024-11-27 07:56:59.492334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.578 [2024-11-27 07:56:59.492421] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:05.578 [2024-11-27 07:56:59.492430] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:05.578 [2024-11-27 07:56:59.493421] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.578 [2024-11-27 07:56:59.495959] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:05.578 [2024-11-27 07:56:59.495967] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:05.578 [2024-11-27 07:56:59.496447] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:05.579 [2024-11-27 07:56:59.496458] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:05.579 [2024-11-27 07:56:59.496508] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:05.579 [2024-11-27 07:56:59.498481] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:05.579 are Threshold: 0% 00:14:05.579 Life Percentage Used: 0% 00:14:05.579 Data Units Read: 0 00:14:05.579 Data Units Written: 0 00:14:05.579 Host Read Commands: 0 00:14:05.579 Host Write Commands: 0 00:14:05.579 Controller Busy Time: 0 minutes 00:14:05.579 Power Cycles: 0 00:14:05.579 Power On Hours: 0 hours 00:14:05.579 Unsafe Shutdowns: 0 00:14:05.579 Unrecoverable Media Errors: 0 00:14:05.579 Lifetime Error Log Entries: 0 00:14:05.579 Warning Temperature Time: 0 minutes 00:14:05.579 Critical Temperature Time: 0 minutes 00:14:05.579 00:14:05.579 Number of Queues 00:14:05.579 ================ 00:14:05.579 Number of I/O Submission Queues: 127 00:14:05.579 Number of I/O Completion Queues: 127 00:14:05.579 00:14:05.579 Active Namespaces 00:14:05.579 ================= 00:14:05.579 Namespace ID:1 00:14:05.579 Error Recovery Timeout: Unlimited 00:14:05.579 Command Set Identifier: NVM (00h) 00:14:05.579 Deallocate: Supported 00:14:05.579 Deallocated/Unwritten Error: Not Supported 00:14:05.579 Deallocated Read Value: Unknown 00:14:05.579 Deallocate in Write Zeroes: Not Supported 00:14:05.579 Deallocated Guard Field: 0xFFFF 00:14:05.579 Flush: Supported 00:14:05.579 Reservation: Supported 00:14:05.579 Namespace Sharing Capabilities: Multiple Controllers 00:14:05.579 Size (in LBAs): 131072 (0GiB) 00:14:05.579 Capacity (in LBAs): 131072 (0GiB) 00:14:05.579 Utilization (in LBAs): 131072 (0GiB) 00:14:05.579 NGUID: BAD6359DFA2D47258B2F9CA0879BE1C2 00:14:05.579 UUID: bad6359d-fa2d-4725-8b2f-9ca0879be1c2 00:14:05.579 Thin Provisioning: Not Supported 00:14:05.579 Per-NS Atomic Units: Yes 00:14:05.579 Atomic Boundary Size (Normal): 0 00:14:05.579 Atomic Boundary Size (PFail): 0 00:14:05.579 Atomic Boundary Offset: 0 00:14:05.579 Maximum Single Source Range Length: 65535 00:14:05.579 Maximum Copy Length: 65535 00:14:05.579 Maximum Source Range Count: 1 00:14:05.579 NGUID/EUI64 Never Reused: No 00:14:05.579 Namespace Write Protected: No 00:14:05.579 Number of LBA Formats: 1 00:14:05.579 Current LBA Format: LBA Format #00 00:14:05.579 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:05.579 00:14:05.579 07:56:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
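Each workload in this stretch of the run points a stock SPDK example binary at the same vfio-user endpoint through the -r transport string ("trtype:VFIOUSER traddr:<socket dir> subnqn:<nqn>"); only the tool and its I/O options change. A sketch of that pattern is below, with the identify and read-perf invocations copied from the trace; $SPDK_DIR is the assumed checkout path from the earlier sketch.

    # Transport string reused by identify/perf/reconnect/arbitration runs below.
    TRADDR=/var/run/vfio-user/domain/vfio-user1/1
    SUBNQN=nqn.2019-07.io.spdk:cnode1
    R="trtype:VFIOUSER traddr:$TRADDR subnqn:$SUBNQN"

    $SPDK_DIR/build/bin/spdk_nvme_identify -r "$R" -g -L nvme -L nvme_vfio -L vfio_pci
    $SPDK_DIR/build/bin/spdk_nvme_perf -r "$R" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2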
00:14:05.838 [2024-11-27 07:56:59.732780] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:11.113 Initializing NVMe Controllers 00:14:11.113 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:11.113 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:11.113 Initialization complete. Launching workers. 00:14:11.113 ======================================================== 00:14:11.113 Latency(us) 00:14:11.113 Device Information : IOPS MiB/s Average min max 00:14:11.113 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39978.73 156.17 3203.03 982.85 7568.57 00:14:11.113 ======================================================== 00:14:11.113 Total : 39978.73 156.17 3203.03 982.85 7568.57 00:14:11.113 00:14:11.113 [2024-11-27 07:57:04.753184] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:11.113 07:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:11.113 [2024-11-27 07:57:04.991287] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:16.387 Initializing NVMe Controllers 00:14:16.387 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:16.387 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:16.387 Initialization complete. Launching workers. 
00:14:16.387 ======================================================== 00:14:16.387 Latency(us) 00:14:16.387 Device Information : IOPS MiB/s Average min max 00:14:16.387 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15918.12 62.18 8040.46 4987.31 15961.31 00:14:16.387 ======================================================== 00:14:16.387 Total : 15918.12 62.18 8040.46 4987.31 15961.31 00:14:16.387 00:14:16.387 [2024-11-27 07:57:10.022517] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:16.387 07:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:16.387 [2024-11-27 07:57:10.222510] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:21.659 [2024-11-27 07:57:15.292203] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:21.659 Initializing NVMe Controllers 00:14:21.659 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:21.659 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:21.659 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:21.659 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:21.659 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:21.659 Initialization complete. Launching workers. 00:14:21.659 Starting thread on core 2 00:14:21.659 Starting thread on core 3 00:14:21.659 Starting thread on core 1 00:14:21.659 07:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:21.659 [2024-11-27 07:57:15.583396] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.951 [2024-11-27 07:57:18.720155] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.951 Initializing NVMe Controllers 00:14:24.951 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.951 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.951 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:24.951 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:24.951 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:24.951 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:24.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:24.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:24.951 Initialization complete. Launching workers. 
00:14:24.951 Starting thread on core 1 with urgent priority queue 00:14:24.951 Starting thread on core 2 with urgent priority queue 00:14:24.951 Starting thread on core 3 with urgent priority queue 00:14:24.951 Starting thread on core 0 with urgent priority queue 00:14:24.951 SPDK bdev Controller (SPDK1 ) core 0: 6832.00 IO/s 14.64 secs/100000 ios 00:14:24.951 SPDK bdev Controller (SPDK1 ) core 1: 8462.00 IO/s 11.82 secs/100000 ios 00:14:24.951 SPDK bdev Controller (SPDK1 ) core 2: 9283.33 IO/s 10.77 secs/100000 ios 00:14:24.951 SPDK bdev Controller (SPDK1 ) core 3: 6570.67 IO/s 15.22 secs/100000 ios 00:14:24.951 ======================================================== 00:14:24.951 00:14:24.951 07:57:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:24.951 [2024-11-27 07:57:19.002567] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.951 Initializing NVMe Controllers 00:14:24.951 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.951 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.951 Namespace ID: 1 size: 0GB 00:14:24.952 Initialization complete. 00:14:24.952 INFO: using host memory buffer for IO 00:14:24.952 Hello world! 00:14:24.952 [2024-11-27 07:57:19.035825] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:25.233 07:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:25.233 [2024-11-27 07:57:19.325430] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.613 Initializing NVMe Controllers 00:14:26.613 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:26.613 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:26.613 Initialization complete. Launching workers. 
00:14:26.613 submit (in ns) avg, min, max = 5942.5, 3202.6, 4000248.7 00:14:26.613 complete (in ns) avg, min, max = 21904.9, 1786.1, 4994136.5 00:14:26.613 00:14:26.613 Submit histogram 00:14:26.613 ================ 00:14:26.613 Range in us Cumulative Count 00:14:26.613 3.200 - 3.214: 0.0124% ( 2) 00:14:26.613 3.214 - 3.228: 0.0186% ( 1) 00:14:26.613 3.228 - 3.242: 0.0310% ( 2) 00:14:26.613 3.242 - 3.256: 0.0372% ( 1) 00:14:26.613 3.256 - 3.270: 0.0930% ( 9) 00:14:26.613 3.270 - 3.283: 0.8430% ( 121) 00:14:26.613 3.283 - 3.297: 3.5393% ( 435) 00:14:26.613 3.297 - 3.311: 6.6944% ( 509) 00:14:26.613 3.311 - 3.325: 10.2709% ( 577) 00:14:26.613 3.325 - 3.339: 14.8391% ( 737) 00:14:26.613 3.339 - 3.353: 20.6595% ( 939) 00:14:26.613 3.353 - 3.367: 25.9468% ( 853) 00:14:26.613 3.367 - 3.381: 31.2093% ( 849) 00:14:26.613 3.381 - 3.395: 36.6392% ( 876) 00:14:26.613 3.395 - 3.409: 41.7777% ( 829) 00:14:26.613 3.409 - 3.423: 46.2282% ( 718) 00:14:26.613 3.423 - 3.437: 52.0114% ( 933) 00:14:26.613 3.437 - 3.450: 57.7698% ( 929) 00:14:26.613 3.450 - 3.464: 61.8732% ( 662) 00:14:26.613 3.464 - 3.478: 66.6708% ( 774) 00:14:26.613 3.478 - 3.492: 72.3300% ( 913) 00:14:26.613 3.492 - 3.506: 76.5264% ( 677) 00:14:26.613 3.506 - 3.520: 79.3343% ( 453) 00:14:26.613 3.520 - 3.534: 82.0058% ( 431) 00:14:26.613 3.534 - 3.548: 84.2063% ( 355) 00:14:26.613 3.548 - 3.562: 85.5638% ( 219) 00:14:26.613 3.562 - 3.590: 87.2125% ( 266) 00:14:26.613 3.590 - 3.617: 88.4460% ( 199) 00:14:26.613 3.617 - 3.645: 89.9337% ( 240) 00:14:26.613 3.645 - 3.673: 91.5825% ( 266) 00:14:26.613 3.673 - 3.701: 93.4916% ( 308) 00:14:26.613 3.701 - 3.729: 95.1962% ( 275) 00:14:26.613 3.729 - 3.757: 96.4917% ( 209) 00:14:26.613 3.757 - 3.784: 97.6260% ( 183) 00:14:26.613 3.784 - 3.812: 98.3016% ( 109) 00:14:26.613 3.812 - 3.840: 98.8037% ( 81) 00:14:26.613 3.840 - 3.868: 99.1136% ( 50) 00:14:26.613 3.868 - 3.896: 99.2190% ( 17) 00:14:26.613 3.896 - 3.923: 99.2810% ( 10) 00:14:26.613 3.923 - 3.951: 99.2934% ( 2) 00:14:26.613 3.951 - 3.979: 99.3430% ( 8) 00:14:26.613 3.979 - 4.007: 99.3554% ( 2) 00:14:26.613 4.007 - 4.035: 99.3678% ( 2) 00:14:26.613 4.035 - 4.063: 99.3925% ( 4) 00:14:26.613 4.063 - 4.090: 99.4235% ( 5) 00:14:26.613 4.090 - 4.118: 99.4421% ( 3) 00:14:26.613 4.118 - 4.146: 99.4669% ( 4) 00:14:26.613 4.146 - 4.174: 99.4855% ( 3) 00:14:26.613 4.174 - 4.202: 99.5041% ( 3) 00:14:26.613 4.202 - 4.230: 99.5289% ( 4) 00:14:26.613 4.230 - 4.257: 99.5413% ( 2) 00:14:26.613 4.257 - 4.285: 99.5475% ( 1) 00:14:26.613 4.285 - 4.313: 99.5599% ( 2) 00:14:26.613 4.341 - 4.369: 99.5661% ( 1) 00:14:26.613 4.369 - 4.397: 99.5723% ( 1) 00:14:26.613 4.424 - 4.452: 99.5785% ( 1) 00:14:26.613 4.480 - 4.508: 99.5847% ( 1) 00:14:26.613 4.508 - 4.536: 99.5971% ( 2) 00:14:26.613 4.563 - 4.591: 99.6033% ( 1) 00:14:26.613 4.591 - 4.619: 99.6095% ( 1) 00:14:26.613 4.647 - 4.675: 99.6219% ( 2) 00:14:26.613 4.675 - 4.703: 99.6281% ( 1) 00:14:26.613 4.703 - 4.730: 99.6343% ( 1) 00:14:26.613 4.758 - 4.786: 99.6405% ( 1) 00:14:26.613 4.870 - 4.897: 99.6467% ( 1) 00:14:26.613 4.981 - 5.009: 99.6529% ( 1) 00:14:26.613 5.064 - 5.092: 99.6591% ( 1) 00:14:26.613 5.148 - 5.176: 99.6653% ( 1) 00:14:26.613 5.315 - 5.343: 99.6715% ( 1) 00:14:26.613 6.205 - 6.233: 99.6777% ( 1) 00:14:26.613 6.483 - 6.511: 99.6901% ( 2) 00:14:26.613 6.511 - 6.539: 99.6963% ( 1) 00:14:26.613 6.539 - 6.567: 99.7025% ( 1) 00:14:26.613 6.567 - 6.595: 99.7087% ( 1) 00:14:26.613 6.623 - 6.650: 99.7149% ( 1) 00:14:26.613 6.650 - 6.678: 99.7211% ( 1) 00:14:26.613 6.706 - 6.734: 99.7273% ( 1) 
00:14:26.613 6.929 - 6.957: 99.7335% ( 1) 00:14:26.613 6.984 - 7.012: 99.7521% ( 3) 00:14:26.613 7.068 - 7.096: 99.7583% ( 1) 00:14:26.613 7.096 - 7.123: 99.7645% ( 1) 00:14:26.613 7.123 - 7.179: 99.7769% ( 2) 00:14:26.613 7.179 - 7.235: 99.7893% ( 2) 00:14:26.613 7.290 - 7.346: 99.7955% ( 1) 00:14:26.613 7.346 - 7.402: 99.8016% ( 1) 00:14:26.613 7.569 - 7.624: 99.8078% ( 1) 00:14:26.613 7.624 - 7.680: 99.8140% ( 1) 00:14:26.613 7.680 - 7.736: 99.8326% ( 3) 00:14:26.613 7.791 - 7.847: 99.8388% ( 1) 00:14:26.613 7.847 - 7.903: 99.8450% ( 1) 00:14:26.613 7.903 - 7.958: 99.8574% ( 2) 00:14:26.613 7.958 - 8.014: 99.8698% ( 2) 00:14:26.613 8.181 - 8.237: 99.8760% ( 1) 00:14:26.613 8.237 - 8.292: 99.8822% ( 1) 00:14:26.613 8.348 - 8.403: 99.8946% ( 2) 00:14:26.613 8.459 - 8.515: 99.9008% ( 1) 00:14:26.613 8.515 - 8.570: 99.9070% ( 1) 00:14:26.613 8.570 - 8.626: 99.9132% ( 1) 00:14:26.613 8.626 - 8.682: 99.9194% ( 1) 00:14:26.613 8.849 - 8.904: 99.9256% ( 1) 00:14:26.613 10.240 - 10.296: 99.9318% ( 1) 00:14:26.613 23.263 - 23.374: 99.9380% ( 1) 00:14:26.613 3989.148 - 4017.642: 100.0000% ( 10) 00:14:26.613 00:14:26.613 Complete histogram 00:14:26.613 ================== 00:14:26.613 Range in us Cumulative Count 00:14:26.613 1.781 - 1.795: 0.0062% ( 1) 00:14:26.613 1.809 - 1.823: 0.7500% ( 120) 00:14:26.613 1.823 - 1.837: 19.3888% ( 3007) 00:14:26.613 1.837 - 1.850: 36.8127% ( 2811) 00:14:26.613 1.850 - 1.864: 41.1889% ( 706) 00:14:26.613 1.864 - 1.878: 43.7116% ( 407) 00:14:26.613 1.878 - 1.892: 64.0488% ( 3281) 00:14:26.613 1.892 - 1.906: 89.0101% ( 4027) 00:14:26.613 1.906 - 1.920: 94.6197% ( 905) 00:14:26.613 1.920 - 1.934: 96.4669% ( 298) 00:14:26.613 1.934 - 1.948: 96.8450% ( 61) 00:14:26.613 1.948 - 1.962: 97.4152% ( 92) 00:14:26.613 1.962 - 1.976: 98.0661% ( 105) 00:14:26.613 1.976 - 1.990: 98.4690% ( 65) 00:14:26.613 1.990 - 2.003: 98.5929% ( 20) 00:14:26.613 2.003 - 2.017: 98.6487% ( 9) 00:14:26.613 2.017 - 2.031: 98.6859% ( 6) 00:14:26.613 2.031 - 2.045: 98.7293% ( 7) 00:14:26.613 2.045 - 2.059: 98.7665% ( 6) 00:14:26.613 2.059 - 2.073: 98.7727% ( 1) 00:14:26.613 2.073 - 2.087: 98.8037% ( 5) 00:14:26.613 2.087 - 2.101: 98.8161% ( 2) 00:14:26.613 2.101 - 2.115: 98.8409% ( 4) 00:14:26.613 2.115 - 2.129: 98.8533% ( 2) 00:14:26.613 2.129 - 2.143: 98.8657% ( 2) 00:14:26.613 2.143 - 2.157: 98.8719% ( 1) 00:14:26.613 2.157 - 2.170: 98.8905% ( 3) 00:14:26.613 2.184 - 2.198: 98.9029% ( 2) 00:14:26.613 2.198 - 2.212: 98.9091% ( 1) 00:14:26.613 2.212 - 2.226: 98.9215% ( 2) 00:14:26.613 2.226 - 2.240: 98.9277% ( 1) 00:14:26.613 2.240 - 2.254: 98.9401% ( 2) 00:14:26.613 2.254 - 2.268: 98.9463% ( 1) 00:14:26.613 2.268 - 2.282: 98.9835% ( 6) 00:14:26.613 2.282 - 2.296: 98.9958% ( 2) 00:14:26.613 2.296 - 2.310: 99.0206% ( 4) 00:14:26.613 2.323 - 2.337: 99.0268% ( 1) 00:14:26.613 2.337 - 2.351: 99.0330% ( 1) 00:14:26.613 2.351 - 2.365: 99.0454% ( 2) 00:14:26.613 2.365 - 2.379: 99.0578% ( 2) 00:14:26.613 2.379 - 2.393: 99.0764% ( 3) 00:14:26.613 2.407 - 2.421: 99.0888% ( 2) 00:14:26.613 2.435 - 2.449: 99.0950% ( 1) 00:14:26.613 2.449 - 2.463: 99.1136% ( 3) 00:14:26.613 2.463 - 2.477: 99.1198% ( 1) 00:14:26.613 2.477 - 2.490: 99.1322% ( 2) 00:14:26.613 2.490 - 2.504: 99.1384% ( 1) 00:14:26.613 2.504 - 2.518: 99.1508% ( 2) 00:14:26.613 2.518 - 2.532: 99.1570% ( 1) 00:14:26.613 2.532 - 2.546: 99.1632% ( 1) 00:14:26.613 2.546 - 2.560: 99.1756% ( 2) 00:14:26.613 2.560 - 2.574: 99.1880% ( 2) 00:14:26.613 2.574 - 2.588: 99.2066% ( 3) 00:14:26.613 2.588 - 2.602: 99.2128% ( 1) 00:14:26.613 2.602 - 2.616: 99.2190% 
( 1) 00:14:26.613 2.657 - 2.671: 99.2376% ( 3) 00:14:26.613 2.671 - 2.685: 99.2438% ( 1) 00:14:26.613 2.713 - 2.727: 99.2500% ( 1) 00:14:26.613 2.741 - 2.755: 99.2686% ( 3) 00:14:26.613 2.755 - 2.769: 99.2810% ( 2) 00:14:26.613 2.810 - 2.824: 99.2934% ( 2) 00:14:26.613 2.838 - 2.852: 99.2996% ( 1) 00:14:26.613 [2024-11-27 07:57:20.347502] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:26.613 2.880 - 2.894: 99.3120% ( 2) 00:14:26.613 2.922 - 2.936: 99.3182% ( 1) 00:14:26.613 2.977 - 2.991: 99.3244% ( 1) 00:14:26.613 4.369 - 4.397: 99.3306% ( 1) 00:14:26.613 4.480 - 4.508: 99.3368% ( 1) 00:14:26.613 4.591 - 4.619: 99.3430% ( 1) 00:14:26.613 4.619 - 4.647: 99.3492% ( 1) 00:14:26.613 4.647 - 4.675: 99.3554% ( 1) 00:14:26.613 4.703 - 4.730: 99.3616% ( 1) 00:14:26.613 4.786 - 4.814: 99.3678% ( 1) 00:14:26.613 5.510 - 5.537: 99.3740% ( 1) 00:14:26.613 5.537 - 5.565: 99.3802% ( 1) 00:14:26.613 5.704 - 5.732: 99.3864% ( 1) 00:14:26.613 6.150 - 6.177: 99.3925% ( 1) 00:14:26.613 6.261 - 6.289: 99.3987% ( 1) 00:14:26.613 6.650 - 6.678: 99.4049% ( 1) 00:14:26.613 6.706 - 6.734: 99.4111% ( 1) 00:14:26.613 6.845 - 6.873: 99.4235% ( 2) 00:14:26.613 6.873 - 6.901: 99.4297% ( 1) 00:14:26.613 7.179 - 7.235: 99.4483% ( 3) 00:14:26.613 7.235 - 7.290: 99.4545% ( 1) 00:14:26.613 7.513 - 7.569: 99.4607% ( 1) 00:14:26.614 7.791 - 7.847: 99.4669% ( 1) 00:14:26.614 9.016 - 9.071: 99.4731% ( 1) 00:14:26.614 10.741 - 10.797: 99.4793% ( 1) 00:14:26.614 12.856 - 12.911: 99.4855% ( 1) 00:14:26.614 13.301 - 13.357: 99.4917% ( 1) 00:14:26.614 29.162 - 29.384: 99.4979% ( 1) 00:14:26.614 2137.043 - 2151.290: 99.5041% ( 1) 00:14:26.614 3989.148 - 4017.642: 99.9876% ( 78) 00:14:26.614 4160.111 - 4188.605: 99.9938% ( 1) 00:14:26.614 4986.435 - 5014.929: 100.0000% ( 1) 00:14:26.614 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:26.614 [ 00:14:26.614 { 00:14:26.614 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:26.614 "subtype": "Discovery", 00:14:26.614 "listen_addresses": [], 00:14:26.614 "allow_any_host": true, 00:14:26.614 "hosts": [] 00:14:26.614 }, 00:14:26.614 { 00:14:26.614 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:26.614 "subtype": "NVMe", 00:14:26.614 "listen_addresses": [ 00:14:26.614 { 00:14:26.614 "trtype": "VFIOUSER", 00:14:26.614 "adrfam": "IPv4", 00:14:26.614 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:26.614 "trsvcid": "0" 00:14:26.614 } 00:14:26.614 ], 00:14:26.614 "allow_any_host": true, 00:14:26.614 "hosts": [], 00:14:26.614 "serial_number": "SPDK1", 00:14:26.614 "model_number": "SPDK bdev Controller", 00:14:26.614 "max_namespaces": 32, 00:14:26.614 "min_cntlid": 1, 00:14:26.614 "max_cntlid": 65519, 00:14:26.614 "namespaces": [ 00:14:26.614 { 00:14:26.614 "nsid": 1, 00:14:26.614 "bdev_name": 
"Malloc1", 00:14:26.614 "name": "Malloc1", 00:14:26.614 "nguid": "BAD6359DFA2D47258B2F9CA0879BE1C2", 00:14:26.614 "uuid": "bad6359d-fa2d-4725-8b2f-9ca0879be1c2" 00:14:26.614 } 00:14:26.614 ] 00:14:26.614 }, 00:14:26.614 { 00:14:26.614 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:26.614 "subtype": "NVMe", 00:14:26.614 "listen_addresses": [ 00:14:26.614 { 00:14:26.614 "trtype": "VFIOUSER", 00:14:26.614 "adrfam": "IPv4", 00:14:26.614 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:26.614 "trsvcid": "0" 00:14:26.614 } 00:14:26.614 ], 00:14:26.614 "allow_any_host": true, 00:14:26.614 "hosts": [], 00:14:26.614 "serial_number": "SPDK2", 00:14:26.614 "model_number": "SPDK bdev Controller", 00:14:26.614 "max_namespaces": 32, 00:14:26.614 "min_cntlid": 1, 00:14:26.614 "max_cntlid": 65519, 00:14:26.614 "namespaces": [ 00:14:26.614 { 00:14:26.614 "nsid": 1, 00:14:26.614 "bdev_name": "Malloc2", 00:14:26.614 "name": "Malloc2", 00:14:26.614 "nguid": "B53B10F9E72A4DD6BA2689EE188B6FC8", 00:14:26.614 "uuid": "b53b10f9-e72a-4dd6-ba26-89ee188b6fc8" 00:14:26.614 } 00:14:26.614 ] 00:14:26.614 } 00:14:26.614 ] 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2415943 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:26.614 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:26.873 [2024-11-27 07:57:20.745593] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.873 Malloc3 00:14:26.873 07:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:27.131 [2024-11-27 07:57:20.986438] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:27.131 07:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:27.131 Asynchronous Event Request test 00:14:27.131 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:27.131 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:27.131 Registering asynchronous event callbacks... 00:14:27.131 Starting namespace attribute notice tests for all controllers... 00:14:27.131 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:27.131 aer_cb - Changed Namespace 00:14:27.131 Cleaning up... 00:14:27.131 [ 00:14:27.131 { 00:14:27.131 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:27.131 "subtype": "Discovery", 00:14:27.131 "listen_addresses": [], 00:14:27.131 "allow_any_host": true, 00:14:27.131 "hosts": [] 00:14:27.131 }, 00:14:27.131 { 00:14:27.131 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:27.131 "subtype": "NVMe", 00:14:27.131 "listen_addresses": [ 00:14:27.131 { 00:14:27.131 "trtype": "VFIOUSER", 00:14:27.131 "adrfam": "IPv4", 00:14:27.131 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:27.131 "trsvcid": "0" 00:14:27.131 } 00:14:27.131 ], 00:14:27.131 "allow_any_host": true, 00:14:27.131 "hosts": [], 00:14:27.131 "serial_number": "SPDK1", 00:14:27.131 "model_number": "SPDK bdev Controller", 00:14:27.131 "max_namespaces": 32, 00:14:27.131 "min_cntlid": 1, 00:14:27.131 "max_cntlid": 65519, 00:14:27.131 "namespaces": [ 00:14:27.131 { 00:14:27.131 "nsid": 1, 00:14:27.131 "bdev_name": "Malloc1", 00:14:27.131 "name": "Malloc1", 00:14:27.131 "nguid": "BAD6359DFA2D47258B2F9CA0879BE1C2", 00:14:27.131 "uuid": "bad6359d-fa2d-4725-8b2f-9ca0879be1c2" 00:14:27.131 }, 00:14:27.131 { 00:14:27.131 "nsid": 2, 00:14:27.131 "bdev_name": "Malloc3", 00:14:27.131 "name": "Malloc3", 00:14:27.131 "nguid": "18042E37EBC744E687D6CCCDDC4AB258", 00:14:27.131 "uuid": "18042e37-ebc7-44e6-87d6-cccddc4ab258" 00:14:27.131 } 00:14:27.131 ] 00:14:27.131 }, 00:14:27.131 { 00:14:27.131 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:27.131 "subtype": "NVMe", 00:14:27.131 "listen_addresses": [ 00:14:27.131 { 00:14:27.131 "trtype": "VFIOUSER", 00:14:27.131 "adrfam": "IPv4", 00:14:27.132 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:27.132 "trsvcid": "0" 00:14:27.132 } 00:14:27.132 ], 00:14:27.132 "allow_any_host": true, 00:14:27.132 "hosts": [], 00:14:27.132 "serial_number": "SPDK2", 00:14:27.132 "model_number": "SPDK bdev 
Controller", 00:14:27.132 "max_namespaces": 32, 00:14:27.132 "min_cntlid": 1, 00:14:27.132 "max_cntlid": 65519, 00:14:27.132 "namespaces": [ 00:14:27.132 { 00:14:27.132 "nsid": 1, 00:14:27.132 "bdev_name": "Malloc2", 00:14:27.132 "name": "Malloc2", 00:14:27.132 "nguid": "B53B10F9E72A4DD6BA2689EE188B6FC8", 00:14:27.132 "uuid": "b53b10f9-e72a-4dd6-ba26-89ee188b6fc8" 00:14:27.132 } 00:14:27.132 ] 00:14:27.132 } 00:14:27.132 ] 00:14:27.132 07:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2415943 00:14:27.132 07:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:27.132 07:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:27.132 07:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:27.132 07:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:27.392 [2024-11-27 07:57:21.251211] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:14:27.393 [2024-11-27 07:57:21.251245] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416175 ] 00:14:27.393 [2024-11-27 07:57:21.290745] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:27.393 [2024-11-27 07:57:21.294997] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:27.393 [2024-11-27 07:57:21.295020] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f79eafc8000 00:14:27.393 [2024-11-27 07:57:21.295994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.393 [2024-11-27 07:57:21.297003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.393 [2024-11-27 07:57:21.298012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.393 [2024-11-27 07:57:21.299016] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.393 [2024-11-27 07:57:21.300020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.393 [2024-11-27 07:57:21.301024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.393 [2024-11-27 07:57:21.302027] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.393 [2024-11-27 07:57:21.303040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:14:27.393 [2024-11-27 07:57:21.304049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:27.393 [2024-11-27 07:57:21.304061] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f79eafbd000 00:14:27.393 [2024-11-27 07:57:21.305002] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:27.393 [2024-11-27 07:57:21.319405] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:27.393 [2024-11-27 07:57:21.319430] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:27.393 [2024-11-27 07:57:21.321487] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:27.393 [2024-11-27 07:57:21.321526] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:27.393 [2024-11-27 07:57:21.321599] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:27.393 [2024-11-27 07:57:21.321612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:27.393 [2024-11-27 07:57:21.321618] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:27.393 [2024-11-27 07:57:21.322491] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:27.393 [2024-11-27 07:57:21.322503] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:27.393 [2024-11-27 07:57:21.322509] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:27.393 [2024-11-27 07:57:21.323493] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:27.393 [2024-11-27 07:57:21.323502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:27.393 [2024-11-27 07:57:21.323509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:27.393 [2024-11-27 07:57:21.324500] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:27.393 [2024-11-27 07:57:21.324509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:27.393 [2024-11-27 07:57:21.325511] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:27.393 [2024-11-27 07:57:21.325520] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:14:27.393 [2024-11-27 07:57:21.325525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:27.393 [2024-11-27 07:57:21.325531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:27.393 [2024-11-27 07:57:21.325639] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:27.393 [2024-11-27 07:57:21.325643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:27.393 [2024-11-27 07:57:21.325650] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:27.393 [2024-11-27 07:57:21.326521] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:27.393 [2024-11-27 07:57:21.327525] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:27.393 [2024-11-27 07:57:21.328530] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:27.393 [2024-11-27 07:57:21.329539] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:27.393 [2024-11-27 07:57:21.329577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:27.393 [2024-11-27 07:57:21.330548] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:27.393 [2024-11-27 07:57:21.330557] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:27.393 [2024-11-27 07:57:21.330562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.330579] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:27.393 [2024-11-27 07:57:21.330586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.330600] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.393 [2024-11-27 07:57:21.330604] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.393 [2024-11-27 07:57:21.330608] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.393 [2024-11-27 07:57:21.330618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.336956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:27.393 
[2024-11-27 07:57:21.336967] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:27.393 [2024-11-27 07:57:21.336971] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:27.393 [2024-11-27 07:57:21.336975] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:27.393 [2024-11-27 07:57:21.336979] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:27.393 [2024-11-27 07:57:21.336984] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:27.393 [2024-11-27 07:57:21.336988] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:27.393 [2024-11-27 07:57:21.336992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.336999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.337008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.344953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.344965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.393 [2024-11-27 07:57:21.344972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.393 [2024-11-27 07:57:21.344980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.393 [2024-11-27 07:57:21.344987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.393 [2024-11-27 07:57:21.344992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.345001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.345009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.352954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.352962] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:27.393 [2024-11-27 07:57:21.352967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:14:27.393 [2024-11-27 07:57:21.352977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.352982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.352991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.360961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.361016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.361024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.361031] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:27.393 [2024-11-27 07:57:21.361035] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:27.393 [2024-11-27 07:57:21.361038] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.393 [2024-11-27 07:57:21.361044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.368955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.368968] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:27.393 [2024-11-27 07:57:21.368978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.368985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.368993] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.393 [2024-11-27 07:57:21.368997] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.393 [2024-11-27 07:57:21.369001] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.393 [2024-11-27 07:57:21.369006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.376952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.376963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.376971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.376977] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.393 [2024-11-27 07:57:21.376982] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.393 [2024-11-27 07:57:21.376985] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.393 [2024-11-27 07:57:21.376990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.384953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.384965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.384972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.384979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.384985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.384989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.384994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.384998] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:27.393 [2024-11-27 07:57:21.385002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:27.393 [2024-11-27 07:57:21.385007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:27.393 [2024-11-27 07:57:21.385024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.392952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.392965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.400952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.400964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.408953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:14:27.393 [2024-11-27 07:57:21.408966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.416954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.416970] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:27.393 [2024-11-27 07:57:21.416975] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:27.393 [2024-11-27 07:57:21.416978] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:27.393 [2024-11-27 07:57:21.416981] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:27.393 [2024-11-27 07:57:21.416984] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:27.393 [2024-11-27 07:57:21.416990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:27.393 [2024-11-27 07:57:21.416996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:27.393 [2024-11-27 07:57:21.417000] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:27.393 [2024-11-27 07:57:21.417003] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.393 [2024-11-27 07:57:21.417009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.417016] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:27.393 [2024-11-27 07:57:21.417019] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.393 [2024-11-27 07:57:21.417023] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.393 [2024-11-27 07:57:21.417028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.417035] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:27.393 [2024-11-27 07:57:21.417039] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:27.393 [2024-11-27 07:57:21.417042] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.393 [2024-11-27 07:57:21.417047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:27.393 [2024-11-27 07:57:21.424952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.424965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:27.393 [2024-11-27 07:57:21.424975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:27.393 
[2024-11-27 07:57:21.424982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:27.393 ===================================================== 00:14:27.393 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:27.393 ===================================================== 00:14:27.393 Controller Capabilities/Features 00:14:27.394 ================================ 00:14:27.394 Vendor ID: 4e58 00:14:27.394 Subsystem Vendor ID: 4e58 00:14:27.394 Serial Number: SPDK2 00:14:27.394 Model Number: SPDK bdev Controller 00:14:27.394 Firmware Version: 25.01 00:14:27.394 Recommended Arb Burst: 6 00:14:27.394 IEEE OUI Identifier: 8d 6b 50 00:14:27.394 Multi-path I/O 00:14:27.394 May have multiple subsystem ports: Yes 00:14:27.394 May have multiple controllers: Yes 00:14:27.394 Associated with SR-IOV VF: No 00:14:27.394 Max Data Transfer Size: 131072 00:14:27.394 Max Number of Namespaces: 32 00:14:27.394 Max Number of I/O Queues: 127 00:14:27.394 NVMe Specification Version (VS): 1.3 00:14:27.394 NVMe Specification Version (Identify): 1.3 00:14:27.394 Maximum Queue Entries: 256 00:14:27.394 Contiguous Queues Required: Yes 00:14:27.394 Arbitration Mechanisms Supported 00:14:27.394 Weighted Round Robin: Not Supported 00:14:27.394 Vendor Specific: Not Supported 00:14:27.394 Reset Timeout: 15000 ms 00:14:27.394 Doorbell Stride: 4 bytes 00:14:27.394 NVM Subsystem Reset: Not Supported 00:14:27.394 Command Sets Supported 00:14:27.394 NVM Command Set: Supported 00:14:27.394 Boot Partition: Not Supported 00:14:27.394 Memory Page Size Minimum: 4096 bytes 00:14:27.394 Memory Page Size Maximum: 4096 bytes 00:14:27.394 Persistent Memory Region: Not Supported 00:14:27.394 Optional Asynchronous Events Supported 00:14:27.394 Namespace Attribute Notices: Supported 00:14:27.394 Firmware Activation Notices: Not Supported 00:14:27.394 ANA Change Notices: Not Supported 00:14:27.394 PLE Aggregate Log Change Notices: Not Supported 00:14:27.394 LBA Status Info Alert Notices: Not Supported 00:14:27.394 EGE Aggregate Log Change Notices: Not Supported 00:14:27.394 Normal NVM Subsystem Shutdown event: Not Supported 00:14:27.394 Zone Descriptor Change Notices: Not Supported 00:14:27.394 Discovery Log Change Notices: Not Supported 00:14:27.394 Controller Attributes 00:14:27.394 128-bit Host Identifier: Supported 00:14:27.394 Non-Operational Permissive Mode: Not Supported 00:14:27.394 NVM Sets: Not Supported 00:14:27.394 Read Recovery Levels: Not Supported 00:14:27.394 Endurance Groups: Not Supported 00:14:27.394 Predictable Latency Mode: Not Supported 00:14:27.394 Traffic Based Keep ALive: Not Supported 00:14:27.394 Namespace Granularity: Not Supported 00:14:27.394 SQ Associations: Not Supported 00:14:27.394 UUID List: Not Supported 00:14:27.394 Multi-Domain Subsystem: Not Supported 00:14:27.394 Fixed Capacity Management: Not Supported 00:14:27.394 Variable Capacity Management: Not Supported 00:14:27.394 Delete Endurance Group: Not Supported 00:14:27.394 Delete NVM Set: Not Supported 00:14:27.394 Extended LBA Formats Supported: Not Supported 00:14:27.394 Flexible Data Placement Supported: Not Supported 00:14:27.394 00:14:27.394 Controller Memory Buffer Support 00:14:27.394 ================================ 00:14:27.394 Supported: No 00:14:27.394 00:14:27.394 Persistent Memory Region Support 00:14:27.394 ================================ 00:14:27.394 Supported: No 00:14:27.394 00:14:27.394 Admin Command Set Attributes 
00:14:27.394 ============================ 00:14:27.394 Security Send/Receive: Not Supported 00:14:27.394 Format NVM: Not Supported 00:14:27.394 Firmware Activate/Download: Not Supported 00:14:27.394 Namespace Management: Not Supported 00:14:27.394 Device Self-Test: Not Supported 00:14:27.394 Directives: Not Supported 00:14:27.394 NVMe-MI: Not Supported 00:14:27.394 Virtualization Management: Not Supported 00:14:27.394 Doorbell Buffer Config: Not Supported 00:14:27.394 Get LBA Status Capability: Not Supported 00:14:27.394 Command & Feature Lockdown Capability: Not Supported 00:14:27.394 Abort Command Limit: 4 00:14:27.394 Async Event Request Limit: 4 00:14:27.394 Number of Firmware Slots: N/A 00:14:27.394 Firmware Slot 1 Read-Only: N/A 00:14:27.394 Firmware Activation Without Reset: N/A 00:14:27.394 Multiple Update Detection Support: N/A 00:14:27.394 Firmware Update Granularity: No Information Provided 00:14:27.394 Per-Namespace SMART Log: No 00:14:27.394 Asymmetric Namespace Access Log Page: Not Supported 00:14:27.394 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:27.394 Command Effects Log Page: Supported 00:14:27.394 Get Log Page Extended Data: Supported 00:14:27.394 Telemetry Log Pages: Not Supported 00:14:27.394 Persistent Event Log Pages: Not Supported 00:14:27.394 Supported Log Pages Log Page: May Support 00:14:27.394 Commands Supported & Effects Log Page: Not Supported 00:14:27.394 Feature Identifiers & Effects Log Page:May Support 00:14:27.394 NVMe-MI Commands & Effects Log Page: May Support 00:14:27.394 Data Area 4 for Telemetry Log: Not Supported 00:14:27.394 Error Log Page Entries Supported: 128 00:14:27.394 Keep Alive: Supported 00:14:27.394 Keep Alive Granularity: 10000 ms 00:14:27.394 00:14:27.394 NVM Command Set Attributes 00:14:27.394 ========================== 00:14:27.394 Submission Queue Entry Size 00:14:27.394 Max: 64 00:14:27.394 Min: 64 00:14:27.394 Completion Queue Entry Size 00:14:27.394 Max: 16 00:14:27.394 Min: 16 00:14:27.394 Number of Namespaces: 32 00:14:27.394 Compare Command: Supported 00:14:27.394 Write Uncorrectable Command: Not Supported 00:14:27.394 Dataset Management Command: Supported 00:14:27.394 Write Zeroes Command: Supported 00:14:27.394 Set Features Save Field: Not Supported 00:14:27.394 Reservations: Not Supported 00:14:27.394 Timestamp: Not Supported 00:14:27.394 Copy: Supported 00:14:27.394 Volatile Write Cache: Present 00:14:27.394 Atomic Write Unit (Normal): 1 00:14:27.394 Atomic Write Unit (PFail): 1 00:14:27.394 Atomic Compare & Write Unit: 1 00:14:27.394 Fused Compare & Write: Supported 00:14:27.394 Scatter-Gather List 00:14:27.394 SGL Command Set: Supported (Dword aligned) 00:14:27.394 SGL Keyed: Not Supported 00:14:27.394 SGL Bit Bucket Descriptor: Not Supported 00:14:27.394 SGL Metadata Pointer: Not Supported 00:14:27.394 Oversized SGL: Not Supported 00:14:27.394 SGL Metadata Address: Not Supported 00:14:27.394 SGL Offset: Not Supported 00:14:27.394 Transport SGL Data Block: Not Supported 00:14:27.394 Replay Protected Memory Block: Not Supported 00:14:27.394 00:14:27.394 Firmware Slot Information 00:14:27.394 ========================= 00:14:27.394 Active slot: 1 00:14:27.394 Slot 1 Firmware Revision: 25.01 00:14:27.394 00:14:27.394 00:14:27.394 Commands Supported and Effects 00:14:27.394 ============================== 00:14:27.394 Admin Commands 00:14:27.394 -------------- 00:14:27.394 Get Log Page (02h): Supported 00:14:27.394 Identify (06h): Supported 00:14:27.394 Abort (08h): Supported 00:14:27.394 Set Features (09h): Supported 
00:14:27.394 Get Features (0Ah): Supported 00:14:27.394 Asynchronous Event Request (0Ch): Supported 00:14:27.394 Keep Alive (18h): Supported 00:14:27.394 I/O Commands 00:14:27.394 ------------ 00:14:27.394 Flush (00h): Supported LBA-Change 00:14:27.394 Write (01h): Supported LBA-Change 00:14:27.394 Read (02h): Supported 00:14:27.394 Compare (05h): Supported 00:14:27.394 Write Zeroes (08h): Supported LBA-Change 00:14:27.394 Dataset Management (09h): Supported LBA-Change 00:14:27.394 Copy (19h): Supported LBA-Change 00:14:27.394 00:14:27.394 Error Log 00:14:27.394 ========= 00:14:27.394 00:14:27.394 Arbitration 00:14:27.394 =========== 00:14:27.394 Arbitration Burst: 1 00:14:27.394 00:14:27.394 Power Management 00:14:27.394 ================ 00:14:27.394 Number of Power States: 1 00:14:27.394 Current Power State: Power State #0 00:14:27.394 Power State #0: 00:14:27.394 Max Power: 0.00 W 00:14:27.394 Non-Operational State: Operational 00:14:27.394 Entry Latency: Not Reported 00:14:27.394 Exit Latency: Not Reported 00:14:27.394 Relative Read Throughput: 0 00:14:27.394 Relative Read Latency: 0 00:14:27.394 Relative Write Throughput: 0 00:14:27.394 Relative Write Latency: 0 00:14:27.394 Idle Power: Not Reported 00:14:27.394 Active Power: Not Reported 00:14:27.394 Non-Operational Permissive Mode: Not Supported 00:14:27.394 00:14:27.394 Health Information 00:14:27.394 ================== 00:14:27.394 Critical Warnings: 00:14:27.394 Available Spare Space: OK 00:14:27.394 Temperature: OK 00:14:27.394 Device Reliability: OK 00:14:27.394 Read Only: No 00:14:27.394 Volatile Memory Backup: OK 00:14:27.394 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:27.394 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:27.394 Available Spare: 0% 00:14:27.394 Available Spare Threshold: 0% 00:14:27.394 [2024-11-27 07:57:21.425075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:27.394 [2024-11-27 07:57:21.432953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:27.394 [2024-11-27 07:57:21.432982] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:27.394 [2024-11-27 07:57:21.432993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.394 [2024-11-27 07:57:21.432999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.394 [2024-11-27 07:57:21.433005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.394 [2024-11-27 07:57:21.433010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.394 [2024-11-27 07:57:21.433065] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:27.394 [2024-11-27 07:57:21.433075] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:27.394 [2024-11-27 07:57:21.434073] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.394 [2024-11-27 07:57:21.434116] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:27.394 [2024-11-27 07:57:21.434122] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:27.394 [2024-11-27 07:57:21.435071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:27.394 [2024-11-27 07:57:21.435083] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:27.394 [2024-11-27 07:57:21.435127] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:27.394 [2024-11-27 07:57:21.437953] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:27.394 Life Percentage Used: 0% 00:14:27.394 Data Units Read: 0 00:14:27.394 Data Units Written: 0 00:14:27.394 Host Read Commands: 0 00:14:27.394 Host Write Commands: 0 00:14:27.394 Controller Busy Time: 0 minutes 00:14:27.394 Power Cycles: 0 00:14:27.394 Power On Hours: 0 hours 00:14:27.394 Unsafe Shutdowns: 0 00:14:27.394 Unrecoverable Media Errors: 0 00:14:27.394 Lifetime Error Log Entries: 0 00:14:27.394 Warning Temperature Time: 0 minutes 00:14:27.394 Critical Temperature Time: 0 minutes 00:14:27.394 00:14:27.394 Number of Queues 00:14:27.394 ================ 00:14:27.394 Number of I/O Submission Queues: 127 00:14:27.394 Number of I/O Completion Queues: 127 00:14:27.394 00:14:27.394 Active Namespaces 00:14:27.394 ================= 00:14:27.394 Namespace ID:1 00:14:27.394 Error Recovery Timeout: Unlimited 00:14:27.394 Command Set Identifier: NVM (00h) 00:14:27.394 Deallocate: Supported 00:14:27.394 Deallocated/Unwritten Error: Not Supported 00:14:27.394 Deallocated Read Value: Unknown 00:14:27.394 Deallocate in Write Zeroes: Not Supported 00:14:27.394 Deallocated Guard Field: 0xFFFF 00:14:27.394 Flush: Supported 00:14:27.394 Reservation: Supported 00:14:27.394 Namespace Sharing Capabilities: Multiple Controllers 00:14:27.394 Size (in LBAs): 131072 (0GiB) 00:14:27.394 Capacity (in LBAs): 131072 (0GiB) 00:14:27.394 Utilization (in LBAs): 131072 (0GiB) 00:14:27.394 NGUID: B53B10F9E72A4DD6BA2689EE188B6FC8 00:14:27.394 UUID: b53b10f9-e72a-4dd6-ba26-89ee188b6fc8 00:14:27.394 Thin Provisioning: Not Supported 00:14:27.394 Per-NS Atomic Units: Yes 00:14:27.394 Atomic Boundary Size (Normal): 0 00:14:27.394 Atomic Boundary Size (PFail): 0 00:14:27.394 Atomic Boundary Offset: 0 00:14:27.394 Maximum Single Source Range Length: 65535 00:14:27.394 Maximum Copy Length: 65535 00:14:27.394 Maximum Source Range Count: 1 00:14:27.394 NGUID/EUI64 Never Reused: No 00:14:27.394 Namespace Write Protected: No 00:14:27.394 Number of LBA Formats: 1 00:14:27.394 Current LBA Format: LBA Format #00 00:14:27.394 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.394 00:14:27.394 07:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:27.653 [2024-11-27 07:57:21.667499] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:32.925 Initializing NVMe Controllers 00:14:32.925 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:32.925 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:32.925 Initialization complete. Launching workers. 00:14:32.925 ======================================================== 00:14:32.925 Latency(us) 00:14:32.925 Device Information : IOPS MiB/s Average min max 00:14:32.925 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39928.78 155.97 3205.31 997.46 6628.64 00:14:32.925 ======================================================== 00:14:32.925 Total : 39928.78 155.97 3205.31 997.46 6628.64 00:14:32.925 00:14:32.925 [2024-11-27 07:57:26.776200] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:32.925 07:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:32.925 [2024-11-27 07:57:27.007920] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:38.191 Initializing NVMe Controllers 00:14:38.191 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:38.191 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:38.191 Initialization complete. Launching workers. 00:14:38.191 ======================================================== 00:14:38.191 Latency(us) 00:14:38.191 Device Information : IOPS MiB/s Average min max 00:14:38.191 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39975.79 156.16 3202.01 1011.09 9564.93 00:14:38.191 ======================================================== 00:14:38.191 Total : 39975.79 156.16 3202.01 1011.09 9564.93 00:14:38.191 00:14:38.191 [2024-11-27 07:57:32.029654] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:38.191 07:57:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:38.191 [2024-11-27 07:57:32.244544] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.461 [2024-11-27 07:57:37.372049] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.461 Initializing NVMe Controllers 00:14:43.461 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:43.461 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:43.461 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:43.461 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:43.461 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:43.461 Initialization complete. Launching workers. 
00:14:43.461 Starting thread on core 2 00:14:43.461 Starting thread on core 3 00:14:43.461 Starting thread on core 1 00:14:43.461 07:57:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:43.720 [2024-11-27 07:57:37.665744] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.007 [2024-11-27 07:57:40.769206] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.007 Initializing NVMe Controllers 00:14:47.007 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.007 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.007 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:47.007 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:47.007 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:47.007 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:47.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:47.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:47.007 Initialization complete. Launching workers. 00:14:47.007 Starting thread on core 1 with urgent priority queue 00:14:47.007 Starting thread on core 2 with urgent priority queue 00:14:47.007 Starting thread on core 3 with urgent priority queue 00:14:47.007 Starting thread on core 0 with urgent priority queue 00:14:47.007 SPDK bdev Controller (SPDK2 ) core 0: 8828.67 IO/s 11.33 secs/100000 ios 00:14:47.007 SPDK bdev Controller (SPDK2 ) core 1: 7130.67 IO/s 14.02 secs/100000 ios 00:14:47.007 SPDK bdev Controller (SPDK2 ) core 2: 6304.67 IO/s 15.86 secs/100000 ios 00:14:47.007 SPDK bdev Controller (SPDK2 ) core 3: 7485.33 IO/s 13.36 secs/100000 ios 00:14:47.007 ======================================================== 00:14:47.007 00:14:47.007 07:57:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:47.007 [2024-11-27 07:57:41.058431] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.007 Initializing NVMe Controllers 00:14:47.007 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.007 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.007 Namespace ID: 1 size: 0GB 00:14:47.007 Initialization complete. 00:14:47.007 INFO: using host memory buffer for IO 00:14:47.007 Hello world! 
00:14:47.007 [2024-11-27 07:57:41.068495] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.007 07:57:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:47.264 [2024-11-27 07:57:41.347545] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:48.641 Initializing NVMe Controllers 00:14:48.641 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:48.641 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:48.641 Initialization complete. Launching workers. 00:14:48.641 submit (in ns) avg, min, max = 6197.7, 3246.1, 4000660.0 00:14:48.641 complete (in ns) avg, min, max = 20515.2, 1766.1, 5991559.1 00:14:48.641 00:14:48.641 Submit histogram 00:14:48.641 ================ 00:14:48.641 Range in us Cumulative Count 00:14:48.641 3.242 - 3.256: 0.0061% ( 1) 00:14:48.641 3.270 - 3.283: 0.0123% ( 1) 00:14:48.641 3.283 - 3.297: 0.2337% ( 36) 00:14:48.641 3.297 - 3.311: 1.4328% ( 195) 00:14:48.641 3.311 - 3.325: 3.2345% ( 293) 00:14:48.641 3.325 - 3.339: 5.4114% ( 354) 00:14:48.641 3.339 - 3.353: 8.1970% ( 453) 00:14:48.641 3.353 - 3.367: 12.4954% ( 699) 00:14:48.641 3.367 - 3.381: 17.6424% ( 837) 00:14:48.641 3.381 - 3.395: 23.3797% ( 933) 00:14:48.642 3.395 - 3.409: 28.9263% ( 902) 00:14:48.642 3.409 - 3.423: 34.6513% ( 931) 00:14:48.642 3.423 - 3.437: 39.8106% ( 839) 00:14:48.642 3.437 - 3.450: 44.6009% ( 779) 00:14:48.642 3.450 - 3.464: 50.2091% ( 912) 00:14:48.642 3.464 - 3.478: 55.4852% ( 858) 00:14:48.642 3.478 - 3.492: 59.7589% ( 695) 00:14:48.642 3.492 - 3.506: 64.3771% ( 751) 00:14:48.642 3.506 - 3.520: 70.3972% ( 979) 00:14:48.642 3.520 - 3.534: 74.9231% ( 736) 00:14:48.642 3.534 - 3.548: 78.2745% ( 545) 00:14:48.642 3.548 - 3.562: 81.8165% ( 576) 00:14:48.642 3.562 - 3.590: 86.2932% ( 728) 00:14:48.642 3.590 - 3.617: 87.9535% ( 270) 00:14:48.642 3.617 - 3.645: 88.9866% ( 168) 00:14:48.642 3.645 - 3.673: 90.3025% ( 214) 00:14:48.642 3.673 - 3.701: 91.9629% ( 270) 00:14:48.642 3.701 - 3.729: 93.7462% ( 290) 00:14:48.642 3.729 - 3.757: 95.1912% ( 235) 00:14:48.642 3.757 - 3.784: 96.6425% ( 236) 00:14:48.642 3.784 - 3.812: 97.8416% ( 195) 00:14:48.642 3.812 - 3.840: 98.6595% ( 133) 00:14:48.642 3.840 - 3.868: 99.0346% ( 61) 00:14:48.642 3.868 - 3.896: 99.3605% ( 53) 00:14:48.642 3.896 - 3.923: 99.5388% ( 29) 00:14:48.642 3.923 - 3.951: 99.6003% ( 10) 00:14:48.642 3.951 - 3.979: 99.6187% ( 3) 00:14:48.642 3.979 - 4.007: 99.6372% ( 3) 00:14:48.642 5.426 - 5.454: 99.6433% ( 1) 00:14:48.642 5.454 - 5.482: 99.6495% ( 1) 00:14:48.642 5.510 - 5.537: 99.6556% ( 1) 00:14:48.642 5.565 - 5.593: 99.6618% ( 1) 00:14:48.642 5.649 - 5.677: 99.6679% ( 1) 00:14:48.642 5.677 - 5.704: 99.6741% ( 1) 00:14:48.642 5.760 - 5.788: 99.6802% ( 1) 00:14:48.642 5.816 - 5.843: 99.6925% ( 2) 00:14:48.642 5.843 - 5.871: 99.6987% ( 1) 00:14:48.642 5.871 - 5.899: 99.7048% ( 1) 00:14:48.642 5.955 - 5.983: 99.7110% ( 1) 00:14:48.642 6.094 - 6.122: 99.7171% ( 1) 00:14:48.642 6.511 - 6.539: 99.7294% ( 2) 00:14:48.642 6.595 - 6.623: 99.7356% ( 1) 00:14:48.642 6.623 - 6.650: 99.7417% ( 1) 00:14:48.642 6.678 - 6.706: 99.7479% ( 1) 00:14:48.642 6.706 - 6.734: 99.7540% ( 1) 00:14:48.642 6.762 - 6.790: 99.7602% ( 1) 00:14:48.642 7.012 - 7.040: 99.7663% ( 1) 00:14:48.642 7.068 - 
7.096: 99.7786% ( 2) 00:14:48.642 7.179 - 7.235: 99.7909% ( 2) 00:14:48.642 7.290 - 7.346: 99.7971% ( 1) 00:14:48.642 7.513 - 7.569: 99.8032% ( 1) 00:14:48.642 7.624 - 7.680: 99.8094% ( 1) 00:14:48.642 7.680 - 7.736: 99.8155% ( 1) 00:14:48.642 7.736 - 7.791: 99.8217% ( 1) 00:14:48.642 7.791 - 7.847: 99.8340% ( 2) 00:14:48.642 7.847 - 7.903: 99.8463% ( 2) 00:14:48.642 8.181 - 8.237: 99.8524% ( 1) 00:14:48.642 8.237 - 8.292: 99.8586% ( 1) 00:14:48.642 8.403 - 8.459: 99.8647% ( 1) 00:14:48.642 8.459 - 8.515: 99.8709% ( 1) 00:14:48.642 8.737 - 8.793: 99.8770% ( 1) 00:14:48.642 9.016 - 9.071: 99.8893% ( 2) 00:14:48.642 9.071 - 9.127: 99.8955% ( 1) 00:14:48.642 9.127 - 9.183: 99.9016% ( 1) 00:14:48.642 9.183 - 9.238: 99.9078% ( 1) 00:14:48.642 9.238 - 9.294: 99.9139% ( 1) 00:14:48.642 9.517 - 9.572: 99.9201% ( 1) 00:14:48.642 10.017 - 10.073: 99.9262% ( 1) 00:14:48.642 17.030 - 17.141: 99.9324% ( 1) 00:14:48.642 [2024-11-27 07:57:42.449020] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:48.642 3989.148 - 4017.642: 100.0000% ( 11) 00:14:48.642 00:14:48.642 Complete histogram 00:14:48.642 ================== 00:14:48.642 Range in us Cumulative Count 00:14:48.642 1.760 - 1.767: 0.0061% ( 1) 00:14:48.642 1.767 - 1.774: 0.0123% ( 1) 00:14:48.642 1.774 - 1.781: 0.0246% ( 2) 00:14:48.642 1.781 - 1.795: 0.0430% ( 3) 00:14:48.642 1.809 - 1.823: 0.3936% ( 57) 00:14:48.642 1.823 - 1.837: 6.4568% ( 986) 00:14:48.642 1.837 - 1.850: 12.1203% ( 921) 00:14:48.642 1.850 - 1.864: 14.2541% ( 347) 00:14:48.642 1.864 - 1.878: 17.3287% ( 500) 00:14:48.642 1.878 - 1.892: 55.9710% ( 6284) 00:14:48.642 1.892 - 1.906: 88.4824% ( 5287) 00:14:48.642 1.906 - 1.920: 94.5456% ( 986) 00:14:48.642 1.920 - 1.934: 97.1221% ( 419) 00:14:48.642 1.934 - 1.948: 97.7924% ( 109) 00:14:48.642 1.948 - 1.962: 98.4381% ( 105) 00:14:48.642 1.962 - 1.976: 99.0223% ( 95) 00:14:48.642 1.976 - 1.990: 99.2682% ( 40) 00:14:48.642 1.990 - 2.003: 99.3236% ( 9) 00:14:48.642 2.003 - 2.017: 99.3605% ( 6) 00:14:48.642 2.031 - 2.045: 99.3728% ( 2) 00:14:48.642 2.045 - 2.059: 99.3789% ( 1) 00:14:48.642 2.170 - 2.184: 99.3851% ( 1) 00:14:48.642 2.212 - 2.226: 99.3912% ( 1) 00:14:48.642 2.365 - 2.379: 99.3974% ( 1) 00:14:48.642 2.379 - 2.393: 99.4035% ( 1) 00:14:48.642 2.490 - 2.504: 99.4097% ( 1) 00:14:48.642 3.812 - 3.840: 99.4158% ( 1) 00:14:48.642 3.923 - 3.951: 99.4220% ( 1) 00:14:48.642 4.146 - 4.174: 99.4281% ( 1) 00:14:48.642 4.508 - 4.536: 99.4343% ( 1) 00:14:48.642 4.536 - 4.563: 99.4404% ( 1) 00:14:48.642 4.925 - 4.953: 99.4527% ( 2) 00:14:48.642 4.953 - 4.981: 99.4589% ( 1) 00:14:48.642 5.064 - 5.092: 99.4650% ( 1) 00:14:48.642 5.203 - 5.231: 99.4712% ( 1) 00:14:48.642 5.454 - 5.482: 99.4773% ( 1) 00:14:48.642 5.593 - 5.621: 99.4835% ( 1) 00:14:48.642 5.760 - 5.788: 99.4896% ( 1) 00:14:48.642 5.983 - 6.010: 99.4958% ( 1) 00:14:48.642 6.317 - 6.344: 99.5019% ( 1) 00:14:48.642 6.623 - 6.650: 99.5081% ( 1) 00:14:48.642 7.179 - 7.235: 99.5142% ( 1) 00:14:48.642 13.134 - 13.190: 99.5204% ( 1) 00:14:48.642 14.358 - 14.470: 99.5265% ( 1) 00:14:48.642 39.847 - 40.070: 99.5327% ( 1) 00:14:48.642 1210.991 - 1218.115: 99.5388% ( 1) 00:14:48.642 3989.148 - 4017.642: 99.9939% ( 74) 00:14:48.642 5983.722 - 6012.216: 100.0000% ( 1) 00:14:48.642 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:48.642 [ 00:14:48.642 { 00:14:48.642 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:48.642 "subtype": "Discovery", 00:14:48.642 "listen_addresses": [], 00:14:48.642 "allow_any_host": true, 00:14:48.642 "hosts": [] 00:14:48.642 }, 00:14:48.642 { 00:14:48.642 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:48.642 "subtype": "NVMe", 00:14:48.642 "listen_addresses": [ 00:14:48.642 { 00:14:48.642 "trtype": "VFIOUSER", 00:14:48.642 "adrfam": "IPv4", 00:14:48.642 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:48.642 "trsvcid": "0" 00:14:48.642 } 00:14:48.642 ], 00:14:48.642 "allow_any_host": true, 00:14:48.642 "hosts": [], 00:14:48.642 "serial_number": "SPDK1", 00:14:48.642 "model_number": "SPDK bdev Controller", 00:14:48.642 "max_namespaces": 32, 00:14:48.642 "min_cntlid": 1, 00:14:48.642 "max_cntlid": 65519, 00:14:48.642 "namespaces": [ 00:14:48.642 { 00:14:48.642 "nsid": 1, 00:14:48.642 "bdev_name": "Malloc1", 00:14:48.642 "name": "Malloc1", 00:14:48.642 "nguid": "BAD6359DFA2D47258B2F9CA0879BE1C2", 00:14:48.642 "uuid": "bad6359d-fa2d-4725-8b2f-9ca0879be1c2" 00:14:48.642 }, 00:14:48.642 { 00:14:48.642 "nsid": 2, 00:14:48.642 "bdev_name": "Malloc3", 00:14:48.642 "name": "Malloc3", 00:14:48.642 "nguid": "18042E37EBC744E687D6CCCDDC4AB258", 00:14:48.642 "uuid": "18042e37-ebc7-44e6-87d6-cccddc4ab258" 00:14:48.642 } 00:14:48.642 ] 00:14:48.642 }, 00:14:48.642 { 00:14:48.642 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:48.642 "subtype": "NVMe", 00:14:48.642 "listen_addresses": [ 00:14:48.642 { 00:14:48.642 "trtype": "VFIOUSER", 00:14:48.642 "adrfam": "IPv4", 00:14:48.642 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:48.642 "trsvcid": "0" 00:14:48.642 } 00:14:48.642 ], 00:14:48.642 "allow_any_host": true, 00:14:48.642 "hosts": [], 00:14:48.642 "serial_number": "SPDK2", 00:14:48.642 "model_number": "SPDK bdev Controller", 00:14:48.642 "max_namespaces": 32, 00:14:48.642 "min_cntlid": 1, 00:14:48.642 "max_cntlid": 65519, 00:14:48.642 "namespaces": [ 00:14:48.642 { 00:14:48.642 "nsid": 1, 00:14:48.642 "bdev_name": "Malloc2", 00:14:48.642 "name": "Malloc2", 00:14:48.642 "nguid": "B53B10F9E72A4DD6BA2689EE188B6FC8", 00:14:48.642 "uuid": "b53b10f9-e72a-4dd6-ba26-89ee188b6fc8" 00:14:48.642 } 00:14:48.642 ] 00:14:48.642 } 00:14:48.642 ] 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2419632 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:48.642 07:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:48.642 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:48.901 [2024-11-27 07:57:42.857405] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:48.901 Malloc4 00:14:48.901 07:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:49.159 [2024-11-27 07:57:43.083105] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:49.159 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:49.159 Asynchronous Event Request test 00:14:49.159 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:49.159 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:49.159 Registering asynchronous event callbacks... 00:14:49.159 Starting namespace attribute notice tests for all controllers... 00:14:49.159 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:49.159 aer_cb - Changed Namespace 00:14:49.159 Cleaning up... 
00:14:49.419 [ 00:14:49.419 { 00:14:49.419 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:49.419 "subtype": "Discovery", 00:14:49.419 "listen_addresses": [], 00:14:49.419 "allow_any_host": true, 00:14:49.419 "hosts": [] 00:14:49.419 }, 00:14:49.419 { 00:14:49.419 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:49.419 "subtype": "NVMe", 00:14:49.419 "listen_addresses": [ 00:14:49.419 { 00:14:49.419 "trtype": "VFIOUSER", 00:14:49.419 "adrfam": "IPv4", 00:14:49.419 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:49.419 "trsvcid": "0" 00:14:49.419 } 00:14:49.419 ], 00:14:49.419 "allow_any_host": true, 00:14:49.419 "hosts": [], 00:14:49.419 "serial_number": "SPDK1", 00:14:49.419 "model_number": "SPDK bdev Controller", 00:14:49.419 "max_namespaces": 32, 00:14:49.419 "min_cntlid": 1, 00:14:49.419 "max_cntlid": 65519, 00:14:49.419 "namespaces": [ 00:14:49.419 { 00:14:49.419 "nsid": 1, 00:14:49.419 "bdev_name": "Malloc1", 00:14:49.419 "name": "Malloc1", 00:14:49.419 "nguid": "BAD6359DFA2D47258B2F9CA0879BE1C2", 00:14:49.419 "uuid": "bad6359d-fa2d-4725-8b2f-9ca0879be1c2" 00:14:49.419 }, 00:14:49.419 { 00:14:49.419 "nsid": 2, 00:14:49.419 "bdev_name": "Malloc3", 00:14:49.419 "name": "Malloc3", 00:14:49.419 "nguid": "18042E37EBC744E687D6CCCDDC4AB258", 00:14:49.419 "uuid": "18042e37-ebc7-44e6-87d6-cccddc4ab258" 00:14:49.419 } 00:14:49.419 ] 00:14:49.419 }, 00:14:49.419 { 00:14:49.419 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:49.419 "subtype": "NVMe", 00:14:49.419 "listen_addresses": [ 00:14:49.419 { 00:14:49.419 "trtype": "VFIOUSER", 00:14:49.419 "adrfam": "IPv4", 00:14:49.419 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:49.419 "trsvcid": "0" 00:14:49.419 } 00:14:49.419 ], 00:14:49.419 "allow_any_host": true, 00:14:49.419 "hosts": [], 00:14:49.419 "serial_number": "SPDK2", 00:14:49.419 "model_number": "SPDK bdev Controller", 00:14:49.419 "max_namespaces": 32, 00:14:49.419 "min_cntlid": 1, 00:14:49.419 "max_cntlid": 65519, 00:14:49.419 "namespaces": [ 00:14:49.419 { 00:14:49.419 "nsid": 1, 00:14:49.419 "bdev_name": "Malloc2", 00:14:49.419 "name": "Malloc2", 00:14:49.419 "nguid": "B53B10F9E72A4DD6BA2689EE188B6FC8", 00:14:49.419 "uuid": "b53b10f9-e72a-4dd6-ba26-89ee188b6fc8" 00:14:49.419 }, 00:14:49.419 { 00:14:49.419 "nsid": 2, 00:14:49.419 "bdev_name": "Malloc4", 00:14:49.419 "name": "Malloc4", 00:14:49.419 "nguid": "6498780E5CB845B3BA6C5AA2C275A5B3", 00:14:49.419 "uuid": "6498780e-5cb8-45b3-ba6c-5aa2c275a5b3" 00:14:49.419 } 00:14:49.419 ] 00:14:49.419 } 00:14:49.419 ] 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2419632 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2411495 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2411495 ']' 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2411495 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411495 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411495' 00:14:49.419 killing process with pid 2411495 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2411495 00:14:49.419 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2411495 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2419808 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2419808' 00:14:49.679 Process pid: 2419808 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2419808 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2419808 ']' 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.679 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:49.679 [2024-11-27 07:57:43.656891] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:49.679 [2024-11-27 07:57:43.657813] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:14:49.679 [2024-11-27 07:57:43.657856] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.679 [2024-11-27 07:57:43.722322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.679 [2024-11-27 07:57:43.762599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.679 [2024-11-27 07:57:43.762642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.679 [2024-11-27 07:57:43.762650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.679 [2024-11-27 07:57:43.762657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.679 [2024-11-27 07:57:43.762662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.679 [2024-11-27 07:57:43.764248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.679 [2024-11-27 07:57:43.764342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.679 [2024-11-27 07:57:43.764454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.679 [2024-11-27 07:57:43.764455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.939 [2024-11-27 07:57:43.832534] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:49.939 [2024-11-27 07:57:43.832670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:49.939 [2024-11-27 07:57:43.832776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:49.939 [2024-11-27 07:57:43.832994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:49.939 [2024-11-27 07:57:43.833176] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:14:49.939 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.939 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:49.939 07:57:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:50.875 07:57:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:51.134 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:51.134 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:51.134 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:51.134 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:51.134 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:51.394 Malloc1 00:14:51.394 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:51.394 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:51.653 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:51.912 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:51.912 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:51.912 07:57:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:52.170 Malloc2 00:14:52.170 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:52.428 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:52.428 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2419808 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2419808 ']' 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2419808 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2419808 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2419808' 00:14:52.687 killing process with pid 2419808 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2419808 00:14:52.687 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2419808 00:14:52.947 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:52.947 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:52.947 00:14:52.947 real 0m50.931s 00:14:52.947 user 3m17.139s 00:14:52.947 sys 0m3.296s 00:14:52.947 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.947 07:57:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:52.947 ************************************ 00:14:52.947 END TEST nvmf_vfio_user 00:14:52.947 ************************************ 00:14:52.947 07:57:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:52.947 07:57:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:52.947 07:57:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.947 07:57:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.947 ************************************ 00:14:52.947 START TEST nvmf_vfio_user_nvme_compliance 00:14:52.947 ************************************ 00:14:52.947 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:53.206 * Looking for test storage... 
00:14:53.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:53.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.206 --rc genhtml_branch_coverage=1 00:14:53.206 --rc genhtml_function_coverage=1 00:14:53.206 --rc genhtml_legend=1 00:14:53.206 --rc geninfo_all_blocks=1 00:14:53.206 --rc geninfo_unexecuted_blocks=1 00:14:53.206 00:14:53.206 ' 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:53.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.206 --rc genhtml_branch_coverage=1 00:14:53.206 --rc genhtml_function_coverage=1 00:14:53.206 --rc genhtml_legend=1 00:14:53.206 --rc geninfo_all_blocks=1 00:14:53.206 --rc geninfo_unexecuted_blocks=1 00:14:53.206 00:14:53.206 ' 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:53.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.206 --rc genhtml_branch_coverage=1 00:14:53.206 --rc genhtml_function_coverage=1 00:14:53.206 --rc genhtml_legend=1 00:14:53.206 --rc geninfo_all_blocks=1 00:14:53.206 --rc geninfo_unexecuted_blocks=1 00:14:53.206 00:14:53.206 ' 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:53.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.206 --rc genhtml_branch_coverage=1 00:14:53.206 --rc genhtml_function_coverage=1 00:14:53.206 --rc genhtml_legend=1 00:14:53.206 --rc geninfo_all_blocks=1 00:14:53.206 --rc 
geninfo_unexecuted_blocks=1 00:14:53.206 00:14:53.206 ' 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.206 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:53.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2420411 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2420411' 00:14:53.207 Process pid: 2420411 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2420411 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2420411 ']' 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.207 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:53.207 [2024-11-27 07:57:47.270036] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:14:53.207 [2024-11-27 07:57:47.270085] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.469 [2024-11-27 07:57:47.332878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:53.469 [2024-11-27 07:57:47.372608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.469 [2024-11-27 07:57:47.372647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.469 [2024-11-27 07:57:47.372653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.469 [2024-11-27 07:57:47.372659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.469 [2024-11-27 07:57:47.372663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.469 [2024-11-27 07:57:47.373974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.469 [2024-11-27 07:57:47.374069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.469 [2024-11-27 07:57:47.374071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.469 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.469 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:53.469 07:57:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:54.405 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:54.405 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:54.405 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:54.405 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.405 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:54.406 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.406 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:54.406 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:54.406 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.406 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:54.665 malloc0 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:54.665 07:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.665 07:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:54.665 00:14:54.665 00:14:54.665 CUnit - A unit testing framework for C - Version 2.1-3 00:14:54.665 http://cunit.sourceforge.net/ 00:14:54.665 00:14:54.665 00:14:54.665 Suite: nvme_compliance 00:14:54.665 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-27 07:57:48.711403] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.665 [2024-11-27 07:57:48.712771] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:54.665 [2024-11-27 07:57:48.712787] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:54.665 [2024-11-27 07:57:48.712793] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:54.665 [2024-11-27 07:57:48.714421] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.665 passed 00:14:54.924 Test: admin_identify_ctrlr_verify_fused ...[2024-11-27 07:57:48.794002] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.924 [2024-11-27 07:57:48.797003] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.924 passed 00:14:54.924 Test: admin_identify_ns ...[2024-11-27 07:57:48.876415] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.924 [2024-11-27 07:57:48.939959] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:54.924 [2024-11-27 07:57:48.947965] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:54.924 [2024-11-27 07:57:48.969044] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:54.924 passed 00:14:55.184 Test: admin_get_features_mandatory_features ...[2024-11-27 07:57:49.040348] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.184 [2024-11-27 07:57:49.043368] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.184 passed 00:14:55.184 Test: admin_get_features_optional_features ...[2024-11-27 07:57:49.122875] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.184 [2024-11-27 07:57:49.125900] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.184 passed 00:14:55.184 Test: admin_set_features_number_of_queues ...[2024-11-27 07:57:49.203879] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.443 [2024-11-27 07:57:49.310042] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.443 passed 00:14:55.443 Test: admin_get_log_page_mandatory_logs ...[2024-11-27 07:57:49.386084] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.443 [2024-11-27 07:57:49.389100] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.443 passed 00:14:55.443 Test: admin_get_log_page_with_lpo ...[2024-11-27 07:57:49.466044] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.443 [2024-11-27 07:57:49.534958] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:55.443 [2024-11-27 07:57:49.548001] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.701 passed 00:14:55.701 Test: fabric_property_get ...[2024-11-27 07:57:49.625112] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.701 [2024-11-27 07:57:49.626347] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:55.701 [2024-11-27 07:57:49.628132] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.701 passed 00:14:55.701 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-27 07:57:49.705637] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.701 [2024-11-27 07:57:49.706879] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:55.701 [2024-11-27 07:57:49.708653] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.701 passed 00:14:55.701 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-27 07:57:49.786585] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.959 [2024-11-27 07:57:49.870958] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:55.959 [2024-11-27 07:57:49.886955] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:55.959 [2024-11-27 07:57:49.892044] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.959 passed 00:14:55.959 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-27 07:57:49.967180] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.959 [2024-11-27 07:57:49.968419] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:55.959 [2024-11-27 07:57:49.971208] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.959 passed 00:14:55.959 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-27 07:57:50.046534] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:56.304 [2024-11-27 07:57:50.118956] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:56.304 [2024-11-27 07:57:50.141957] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:56.304 [2024-11-27 07:57:50.147084] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:56.304 passed 00:14:56.304 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-27 07:57:50.227079] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:56.304 [2024-11-27 07:57:50.228317] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:56.304 [2024-11-27 07:57:50.228343] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:56.304 [2024-11-27 07:57:50.230101] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:56.304 passed 00:14:56.304 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-27 07:57:50.308528] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:56.597 [2024-11-27 07:57:50.400958] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:56.597 [2024-11-27 07:57:50.408960] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:56.597 [2024-11-27 07:57:50.416954] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:56.597 [2024-11-27 07:57:50.424968] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:56.597 [2024-11-27 07:57:50.454044] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:56.597 passed 00:14:56.597 Test: admin_create_io_sq_verify_pc ...[2024-11-27 07:57:50.529335] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:56.597 [2024-11-27 07:57:50.545963] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:56.597 [2024-11-27 07:57:50.563503] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:56.597 passed 00:14:56.597 Test: admin_create_io_qp_max_qps ...[2024-11-27 07:57:50.643047] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.677 [2024-11-27 07:57:51.760959] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:58.246 [2024-11-27 07:57:52.133668] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.246 passed 00:14:58.246 Test: admin_create_io_sq_shared_cq ...[2024-11-27 07:57:52.211784] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.246 [2024-11-27 07:57:52.345953] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:58.506 [2024-11-27 07:57:52.383017] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.506 passed 00:14:58.506 00:14:58.506 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.506 suites 1 1 n/a 0 0 00:14:58.506 tests 18 18 18 0 0 00:14:58.506 asserts 
360 360 360 0 n/a 00:14:58.506 00:14:58.506 Elapsed time = 1.514 seconds 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2420411 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2420411 ']' 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2420411 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2420411 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2420411' 00:14:58.506 killing process with pid 2420411 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2420411 00:14:58.506 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2420411 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:58.766 00:14:58.766 real 0m5.645s 00:14:58.766 user 0m15.804s 00:14:58.766 sys 0m0.512s 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:58.766 ************************************ 00:14:58.766 END TEST nvmf_vfio_user_nvme_compliance 00:14:58.766 ************************************ 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:58.766 ************************************ 00:14:58.766 START TEST nvmf_vfio_user_fuzz 00:14:58.766 ************************************ 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:58.766 * Looking for test storage... 
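Before the fuzz-test trace gets going, a condensed recap of the compliance sequence that just finished. The RPC calls and the nvme_compliance invocation are copied from the trace above; using rpc.py directly is an assumption in place of the suite's rpc_cmd wrapper, and the VFIOUSER transport, the malloc0 bdev and the subsystem are assumed to have been created earlier in compliance.sh.

  # Sketch of compliance.sh@37..@44 as traced above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2021-09.io.spdk:cnode0

  "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0                                        # @37
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a /var/run/vfio-user -s 0   # @38

  # 18-test CUnit suite against the vfio-user endpoint (@40).
  "$SPDK/test/nvme/compliance/nvme_compliance" -g \
      -r "trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:$NQN"

  rm -rf /var/run/vfio-user                                                          # @44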
00:14:58.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:58.766 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:59.026 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:59.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.027 --rc genhtml_branch_coverage=1 00:14:59.027 --rc genhtml_function_coverage=1 00:14:59.027 --rc genhtml_legend=1 00:14:59.027 --rc geninfo_all_blocks=1 00:14:59.027 --rc geninfo_unexecuted_blocks=1 00:14:59.027 00:14:59.027 ' 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:59.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.027 --rc genhtml_branch_coverage=1 00:14:59.027 --rc genhtml_function_coverage=1 00:14:59.027 --rc genhtml_legend=1 00:14:59.027 --rc geninfo_all_blocks=1 00:14:59.027 --rc geninfo_unexecuted_blocks=1 00:14:59.027 00:14:59.027 ' 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:59.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.027 --rc genhtml_branch_coverage=1 00:14:59.027 --rc genhtml_function_coverage=1 00:14:59.027 --rc genhtml_legend=1 00:14:59.027 --rc geninfo_all_blocks=1 00:14:59.027 --rc geninfo_unexecuted_blocks=1 00:14:59.027 00:14:59.027 ' 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:59.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.027 --rc genhtml_branch_coverage=1 00:14:59.027 --rc genhtml_function_coverage=1 00:14:59.027 --rc genhtml_legend=1 00:14:59.027 --rc geninfo_all_blocks=1 00:14:59.027 --rc geninfo_unexecuted_blocks=1 00:14:59.027 00:14:59.027 ' 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:59.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2421406 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2421406' 00:14:59.027 Process pid: 2421406 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2421406 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2421406 ']' 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
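The fuzz script launches its own target process and then blocks until the RPC socket at /var/tmp/spdk.sock answers, which is what the "Waiting for process to start up..." message above is about. A minimal sketch of that startup pattern, with a plain polling loop assumed in place of the suite's waitforlisten helper (binary path and flags as in the surrounding trace):

  # Sketch of the startup traced at vfio_user_fuzz.sh@23..@28.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &   # single-core target, all trace flags
  nvmfpid=$!
  trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT

  # Assumption: poll the default RPC socket instead of calling waitforlisten.
  until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done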
00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:59.027 07:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:59.287 07:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.287 07:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:59.287 07:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 malloc0 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:00.231 07:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:32.324 Fuzzing completed. Shutting down the fuzz application 00:15:32.324 00:15:32.324 Dumping successful admin opcodes: 00:15:32.324 9, 10, 00:15:32.324 Dumping successful io opcodes: 00:15:32.324 0, 00:15:32.324 NS: 0x20000081ef00 I/O qp, Total commands completed: 995679, total successful commands: 3896, random_seed: 3896849408 00:15:32.324 NS: 0x20000081ef00 admin qp, Total commands completed: 246480, total successful commands: 57, random_seed: 4237209920 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2421406 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2421406 ']' 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2421406 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421406 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421406' 00:15:32.324 killing process with pid 2421406 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2421406 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2421406 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:32.324 00:15:32.324 real 0m32.175s 00:15:32.324 user 0m29.742s 00:15:32.324 sys 0m30.945s 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
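The whole fuzz pass traced above reduces to configuring one vfio-user subsystem over RPC and then running the nvme_fuzz app against it for 30 seconds with a fixed seed. A condensed sketch with the arguments copied from the trace (rpc.py in place of rpc_cmd is an assumption):

  # Condensed from vfio_user_fuzz.sh@32..@44 as traced above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2021-09.io.spdk:cnode0

  "$RPC" nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  "$RPC" bdev_malloc_create 64 512 -b malloc0
  "$RPC" nvmf_create_subsystem "$NQN" -a -s spdk
  "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a /var/run/vfio-user -s 0

  # 30-second admin + I/O fuzz run; -S fixes the seed so a failure reproduces.
  "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -N -a \
      -F "trtype:VFIOUSER subnqn:$NQN traddr:/var/run/vfio-user"

  "$RPC" nvmf_delete_subsystem "$NQN"   # teardown (@44)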
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.324 ************************************ 00:15:32.324 END TEST nvmf_vfio_user_fuzz 00:15:32.324 ************************************ 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.324 ************************************ 00:15:32.324 START TEST nvmf_auth_target 00:15:32.324 ************************************ 00:15:32.324 07:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:32.324 * Looking for test storage... 00:15:32.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.324 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:32.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.325 --rc genhtml_branch_coverage=1 00:15:32.325 --rc genhtml_function_coverage=1 00:15:32.325 --rc genhtml_legend=1 00:15:32.325 --rc geninfo_all_blocks=1 00:15:32.325 --rc geninfo_unexecuted_blocks=1 00:15:32.325 00:15:32.325 ' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:32.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.325 --rc genhtml_branch_coverage=1 00:15:32.325 --rc genhtml_function_coverage=1 00:15:32.325 --rc genhtml_legend=1 00:15:32.325 --rc geninfo_all_blocks=1 00:15:32.325 --rc geninfo_unexecuted_blocks=1 00:15:32.325 00:15:32.325 ' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:32.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.325 --rc genhtml_branch_coverage=1 00:15:32.325 --rc genhtml_function_coverage=1 00:15:32.325 --rc genhtml_legend=1 00:15:32.325 --rc geninfo_all_blocks=1 00:15:32.325 --rc geninfo_unexecuted_blocks=1 00:15:32.325 00:15:32.325 ' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:32.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.325 --rc genhtml_branch_coverage=1 00:15:32.325 --rc genhtml_function_coverage=1 00:15:32.325 --rc genhtml_legend=1 00:15:32.325 --rc geninfo_all_blocks=1 00:15:32.325 --rc geninfo_unexecuted_blocks=1 00:15:32.325 00:15:32.325 ' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.325 07:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:32.325 07:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:36.524 
07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:36.524 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.524 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.525 07:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:36.525 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:36.525 Found net devices under 0000:86:00.0: cvl_0_0 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:36.525 Found net devices under 0000:86:00.1: cvl_0_1 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.525 07:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:36.525 07:58:30 
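nvmf_tcp_init above moves one e810 port into a private network namespace so target and initiator exchange NVMe/TCP traffic over a real link on the same host, ending with the connectivity check the next lines show. A condensed sketch of those steps, interface names and addresses taken from the trace (needs root):

  # Condensed from nvmf_tcp_init as traced above.
  TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"            # target port lives inside the namespace

  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Firewall rule from the trace: allow TCP port 4420 on the initiator-side interface.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                              # root namespace -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> root namespace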
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:36.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:15:36.525 00:15:36.525 --- 10.0.0.2 ping statistics --- 00:15:36.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.525 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:15:36.525 00:15:36.525 --- 10.0.0.1 ping statistics --- 00:15:36.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.525 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:36.525 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2429699 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2429699 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2429699 ']' 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
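The nvmf_tcp_init block above carves the two E810 ports into a point-to-point test topology: the target port is moved into its own network namespace, both sides get a 10.0.0.x/24 address, TCP port 4420 is opened in iptables, and reachability is checked with ping before nvmf_tgt is started inside the namespace. A minimal sketch of that setup, assuming the interface names (cvl_0_0, cvl_0_1) and addresses seen in this run:

# target port lives in its own namespace, initiator port stays in the default one
TGT_IF=cvl_0_0              # names as reported under "Found net devices" above
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target side
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP traffic in, then verify both directions
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# the target application is then launched inside the namespace, as in the trace:
# ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth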
00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2429729 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b7da24dda87527f4ef72c7256c51a94bc8c877a0000489c9 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.k8O 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b7da24dda87527f4ef72c7256c51a94bc8c877a0000489c9 0 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b7da24dda87527f4ef72c7256c51a94bc8c877a0000489c9 0 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b7da24dda87527f4ef72c7256c51a94bc8c877a0000489c9 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
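gen_dhchap_key above pulls len/2 random bytes with xxd and hands the resulting hex string to format_key, which wraps it into the DHHC-1 container that nvme connect and the bdev_nvme RPCs expect. A standalone sketch of that wrapping; judging from the secrets printed later in the trace, the base64 payload is the ASCII key followed by a 4-byte CRC32, and the little-endian byte order used here is an assumption:

gen_dhchap_secret() {   # usage: gen_dhchap_secret <digest-id> <hex-length>, e.g. "gen_dhchap_secret 0 48" for keys[0]
    local digest_id=$1 hex_len=$2 key
    key=$(xxd -p -c0 -l $((hex_len / 2)) /dev/urandom)   # same urandom/xxd draw as the trace
    # DHHC-1:<2-digit digest id>:<base64(key + CRC32 of key)>:
    python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[1]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$digest_id" "$key"
}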
00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.k8O 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.k8O 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.k8O 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b2dd95e240c0134e263040f081e7fb5ca38c3ef9e5e22f88af68ad89831719a5 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bGF 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b2dd95e240c0134e263040f081e7fb5ca38c3ef9e5e22f88af68ad89831719a5 3 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b2dd95e240c0134e263040f081e7fb5ca38c3ef9e5e22f88af68ad89831719a5 3 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b2dd95e240c0134e263040f081e7fb5ca38c3ef9e5e22f88af68ad89831719a5 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:36.526 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bGF 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bGF 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.bGF 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
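The same helper is invoked repeatedly below: keys[0..3] are the host secrets (digest ids 0 through 3, i.e. no transform through sha512) and ckeys[0..2] are the controller secrets paired with them, each written to a chmod-0600 temp file. A compact sketch of that loop, reusing the gen_dhchap_secret helper sketched above; the temp-file names here are placeholders, the trace itself uses spdk.key-<digest>.XXX:

# host-key and controller-key (digest-id, hex-length) pairs as exercised in this run
host_cfg=("0 48" "1 32" "2 48" "3 64")
ctrl_cfg=("3 64" "2 48" "1 32" "")     # the last host key has no controller key (ckeys[3]= below)

keys=() ckeys=()
for i in "${!host_cfg[@]}"; do
    f=$(mktemp -t spdk.key-host.XXX)
    gen_dhchap_secret ${host_cfg[i]} > "$f" && chmod 0600 "$f"
    keys[i]=$f
    if [[ -n ${ctrl_cfg[i]} ]]; then
        f=$(mktemp -t spdk.key-ctrlr.XXX)
        gen_dhchap_secret ${ctrl_cfg[i]} > "$f" && chmod 0600 "$f"
        ckeys[i]=$f
    else
        ckeys[i]=""
    fi
done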
00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3b0e7c6d8e1cb772b96c3981ff0405b8 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7kQ 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3b0e7c6d8e1cb772b96c3981ff0405b8 1 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3b0e7c6d8e1cb772b96c3981ff0405b8 1 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3b0e7c6d8e1cb772b96c3981ff0405b8 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7kQ 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7kQ 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.7kQ 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b69cec4684bc5f7b5d288c0bdfa42e85982efc151e551427 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xAR 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b69cec4684bc5f7b5d288c0bdfa42e85982efc151e551427 2 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b69cec4684bc5f7b5d288c0bdfa42e85982efc151e551427 2 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.787 07:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b69cec4684bc5f7b5d288c0bdfa42e85982efc151e551427 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xAR 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xAR 00:15:36.787 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.xAR 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0f93596d3721e21a59f637aee6a7b5ab5df1d0f2343399f8 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GDc 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0f93596d3721e21a59f637aee6a7b5ab5df1d0f2343399f8 2 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0f93596d3721e21a59f637aee6a7b5ab5df1d0f2343399f8 2 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0f93596d3721e21a59f637aee6a7b5ab5df1d0f2343399f8 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GDc 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GDc 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.GDc 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c2e03486f89c210ec83232f3adb566e6 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oxD 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c2e03486f89c210ec83232f3adb566e6 1 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c2e03486f89c210ec83232f3adb566e6 1 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c2e03486f89c210ec83232f3adb566e6 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oxD 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oxD 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.oxD 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:36.788 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3fa486abe98965f359d04a9da490f21075c2ef1a88056e975b943ba6be4019b1 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rgr 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 3fa486abe98965f359d04a9da490f21075c2ef1a88056e975b943ba6be4019b1 3 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3fa486abe98965f359d04a9da490f21075c2ef1a88056e975b943ba6be4019b1 3 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3fa486abe98965f359d04a9da490f21075c2ef1a88056e975b943ba6be4019b1 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rgr 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rgr 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.rgr 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2429699 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2429699 ']' 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.047 07:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.047 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.047 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:37.047 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2429729 /var/tmp/host.sock 00:15:37.047 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2429729 ']' 00:15:37.047 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:37.047 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.047 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:37.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
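With both applications listening, the remainder of the trace registers each key file with the two RPC servers (the target on the default /var/tmp/spdk.sock, the host-side bdev_nvme app on /var/tmp/host.sock) and then walks every digest/dhgroup/key combination. Condensed from the entries that follow, one iteration looks roughly like this; the rpc.py path is shortened, and the key names, bdev name and NQNs are the ones used in this run:

rpc_tgt()  { scripts/rpc.py "$@"; }                        # target RPC, default /var/tmp/spdk.sock
rpc_host() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side RPC

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# 1) make the secrets visible to both sides under the names key0/ckey0
#    (keys[0] is /tmp/spdk.key-null.k8O in this run, ckeys[0] is /tmp/spdk.key-sha512.bGF)
rpc_tgt  keyring_file_add_key key0  "${keys[0]}"
rpc_host keyring_file_add_key key0  "${keys[0]}"
rpc_tgt  keyring_file_add_key ckey0 "${ckeys[0]}"
rpc_host keyring_file_add_key ckey0 "${ckeys[0]}"

# 2) restrict the initiator to one digest/dhgroup pair for this iteration
rpc_host bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# 3) require those keys for this host on the target, then attach with the same keys
rpc_tgt  nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc_host bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
         -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4) check what was actually negotiated on the resulting qpair
rpc_tgt nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
# expected here: "completed sha256 null"

# 5) tear down before the next combination
rpc_host bdev_nvme_detach_controller nvme0
rpc_tgt  nvmf_subsystem_remove_host "$subnqn" "$hostnqn"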
00:15:37.047 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.047 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.k8O 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.k8O 00:15:37.306 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.k8O 00:15:37.566 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.bGF ]] 00:15:37.566 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bGF 00:15:37.566 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.566 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.566 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.566 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bGF 00:15:37.566 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bGF 00:15:37.824 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:37.824 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7kQ 00:15:37.824 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.824 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.824 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.824 07:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.7kQ 00:15:37.825 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.7kQ 00:15:38.084 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.xAR ]] 00:15:38.084 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xAR 00:15:38.084 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.084 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.084 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.084 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xAR 00:15:38.084 07:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xAR 00:15:38.084 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:38.084 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GDc 00:15:38.084 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.084 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.084 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.084 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.GDc 00:15:38.084 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.GDc 00:15:38.344 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.oxD ]] 00:15:38.344 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oxD 00:15:38.344 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.344 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.345 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.345 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oxD 00:15:38.345 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oxD 00:15:38.605 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:38.605 07:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rgr 00:15:38.605 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.605 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.605 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.605 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.rgr 00:15:38.605 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.rgr 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.864 07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.864 
07:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.123 00:15:39.123 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.123 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.381 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.381 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.381 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.381 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.381 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.381 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.381 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.381 { 00:15:39.381 "cntlid": 1, 00:15:39.381 "qid": 0, 00:15:39.381 "state": "enabled", 00:15:39.381 "thread": "nvmf_tgt_poll_group_000", 00:15:39.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.381 "listen_address": { 00:15:39.381 "trtype": "TCP", 00:15:39.381 "adrfam": "IPv4", 00:15:39.381 "traddr": "10.0.0.2", 00:15:39.381 "trsvcid": "4420" 00:15:39.381 }, 00:15:39.381 "peer_address": { 00:15:39.381 "trtype": "TCP", 00:15:39.381 "adrfam": "IPv4", 00:15:39.381 "traddr": "10.0.0.1", 00:15:39.381 "trsvcid": "57596" 00:15:39.381 }, 00:15:39.381 "auth": { 00:15:39.381 "state": "completed", 00:15:39.381 "digest": "sha256", 00:15:39.381 "dhgroup": "null" 00:15:39.381 } 00:15:39.381 } 00:15:39.381 ]' 00:15:39.381 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.381 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.381 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.640 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:39.640 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.640 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.640 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.640 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.899 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:15:39.899 07:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.467 07:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.467 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.726 00:15:40.726 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.726 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.726 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.984 07:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.984 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.984 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.984 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.984 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.984 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.984 { 00:15:40.984 "cntlid": 3, 00:15:40.984 "qid": 0, 00:15:40.984 "state": "enabled", 00:15:40.984 "thread": "nvmf_tgt_poll_group_000", 00:15:40.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:40.984 "listen_address": { 00:15:40.984 "trtype": "TCP", 00:15:40.984 "adrfam": "IPv4", 00:15:40.984 "traddr": "10.0.0.2", 00:15:40.984 "trsvcid": "4420" 00:15:40.984 }, 00:15:40.984 "peer_address": { 00:15:40.984 "trtype": "TCP", 00:15:40.984 "adrfam": "IPv4", 00:15:40.984 "traddr": "10.0.0.1", 00:15:40.984 "trsvcid": "36278" 00:15:40.984 }, 00:15:40.984 "auth": { 00:15:40.984 "state": "completed", 00:15:40.984 "digest": "sha256", 00:15:40.984 "dhgroup": "null" 00:15:40.984 } 00:15:40.984 } 00:15:40.984 ]' 00:15:40.984 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.984 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.984 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.242 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:41.242 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.242 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.242 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.242 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.242 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:15:41.242 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:15:41.810 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.810 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.810 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.810 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.810 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.810 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.810 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:41.810 07:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:42.068 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:42.068 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.068 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.068 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.068 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:42.068 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.068 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.069 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.069 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.069 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.069 07:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.069 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.069 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.328 00:15:42.328 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.328 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.328 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.588 { 00:15:42.588 "cntlid": 5, 00:15:42.588 "qid": 0, 00:15:42.588 "state": "enabled", 00:15:42.588 "thread": "nvmf_tgt_poll_group_000", 00:15:42.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.588 "listen_address": { 00:15:42.588 "trtype": "TCP", 00:15:42.588 "adrfam": "IPv4", 00:15:42.588 "traddr": "10.0.0.2", 00:15:42.588 "trsvcid": "4420" 00:15:42.588 }, 00:15:42.588 "peer_address": { 00:15:42.588 "trtype": "TCP", 00:15:42.588 "adrfam": "IPv4", 00:15:42.588 "traddr": "10.0.0.1", 00:15:42.588 "trsvcid": "36300" 00:15:42.588 }, 00:15:42.588 "auth": { 00:15:42.588 "state": "completed", 00:15:42.588 "digest": "sha256", 00:15:42.588 "dhgroup": "null" 00:15:42.588 } 00:15:42.588 } 00:15:42.588 ]' 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:42.588 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.847 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.847 07:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.847 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.847 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:15:42.847 07:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:15:43.415 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.415 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.415 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.415 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.415 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.415 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.415 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:43.415 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:43.674 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:43.674 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.674 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.674 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:43.674 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.674 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.674 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:43.675 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.675 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.675 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.675 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.675 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.675 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.934 00:15:43.934 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.934 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.934 07:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.193 { 00:15:44.193 "cntlid": 7, 00:15:44.193 "qid": 0, 00:15:44.193 "state": "enabled", 00:15:44.193 "thread": "nvmf_tgt_poll_group_000", 00:15:44.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.193 "listen_address": { 00:15:44.193 "trtype": "TCP", 00:15:44.193 "adrfam": "IPv4", 00:15:44.193 "traddr": "10.0.0.2", 00:15:44.193 "trsvcid": "4420" 00:15:44.193 }, 00:15:44.193 "peer_address": { 00:15:44.193 "trtype": "TCP", 00:15:44.193 "adrfam": "IPv4", 00:15:44.193 "traddr": "10.0.0.1", 00:15:44.193 "trsvcid": "36334" 00:15:44.193 }, 00:15:44.193 "auth": { 00:15:44.193 "state": "completed", 00:15:44.193 "digest": "sha256", 00:15:44.193 "dhgroup": "null" 00:15:44.193 } 00:15:44.193 } 00:15:44.193 ]' 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.193 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.453 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:15:44.453 07:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:15:45.023 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.023 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.023 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.023 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.023 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.023 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.023 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.023 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:45.023 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.283 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.542 00:15:45.542 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.542 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.542 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.801 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.801 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.801 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.801 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.801 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.801 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.801 { 00:15:45.801 "cntlid": 9, 00:15:45.801 "qid": 0, 00:15:45.801 "state": "enabled", 00:15:45.801 "thread": "nvmf_tgt_poll_group_000", 00:15:45.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.801 "listen_address": { 00:15:45.801 "trtype": "TCP", 00:15:45.801 "adrfam": "IPv4", 00:15:45.801 "traddr": "10.0.0.2", 00:15:45.801 "trsvcid": "4420" 00:15:45.801 }, 00:15:45.801 "peer_address": { 00:15:45.801 "trtype": "TCP", 00:15:45.801 "adrfam": "IPv4", 00:15:45.801 "traddr": "10.0.0.1", 00:15:45.801 "trsvcid": "36354" 00:15:45.801 }, 00:15:45.801 "auth": { 00:15:45.801 "state": "completed", 00:15:45.801 "digest": "sha256", 00:15:45.801 "dhgroup": "ffdhe2048" 00:15:45.801 } 00:15:45.801 } 00:15:45.801 ]' 00:15:45.801 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.802 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.802 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.802 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:45.802 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.802 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.802 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.802 07:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.061 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:15:46.061 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:15:46.627 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.627 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.627 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.627 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.627 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.627 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.627 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.627 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.887 07:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.887 07:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.146 00:15:47.146 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.146 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.146 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.405 { 00:15:47.405 "cntlid": 11, 00:15:47.405 "qid": 0, 00:15:47.405 "state": "enabled", 00:15:47.405 "thread": "nvmf_tgt_poll_group_000", 00:15:47.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.405 "listen_address": { 00:15:47.405 "trtype": "TCP", 00:15:47.405 "adrfam": "IPv4", 00:15:47.405 "traddr": "10.0.0.2", 00:15:47.405 "trsvcid": "4420" 00:15:47.405 }, 00:15:47.405 "peer_address": { 00:15:47.405 "trtype": "TCP", 00:15:47.405 "adrfam": "IPv4", 00:15:47.405 "traddr": "10.0.0.1", 00:15:47.405 "trsvcid": "36378" 00:15:47.405 }, 00:15:47.405 "auth": { 00:15:47.405 "state": "completed", 00:15:47.405 "digest": "sha256", 00:15:47.405 "dhgroup": "ffdhe2048" 00:15:47.405 } 00:15:47.405 } 00:15:47.405 ]' 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.405 07:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.405 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.665 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:15:47.665 07:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:15:48.234 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.234 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.234 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.234 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.234 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.234 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.234 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:48.234 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:48.493 07:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.493 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.752 00:15:48.753 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.753 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.753 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.012 { 00:15:49.012 "cntlid": 13, 00:15:49.012 "qid": 0, 00:15:49.012 "state": "enabled", 00:15:49.012 "thread": "nvmf_tgt_poll_group_000", 00:15:49.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.012 "listen_address": { 00:15:49.012 "trtype": "TCP", 00:15:49.012 "adrfam": "IPv4", 00:15:49.012 "traddr": "10.0.0.2", 00:15:49.012 "trsvcid": "4420" 00:15:49.012 }, 00:15:49.012 "peer_address": { 00:15:49.012 "trtype": "TCP", 00:15:49.012 "adrfam": "IPv4", 00:15:49.012 "traddr": "10.0.0.1", 00:15:49.012 "trsvcid": "36396" 00:15:49.012 }, 00:15:49.012 "auth": { 00:15:49.012 "state": "completed", 00:15:49.012 "digest": 
"sha256", 00:15:49.012 "dhgroup": "ffdhe2048" 00:15:49.012 } 00:15:49.012 } 00:15:49.012 ]' 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.012 07:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.012 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.012 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.012 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.271 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:15:49.271 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:15:49.840 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.840 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.840 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.840 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.840 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.840 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.840 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:49.840 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.100 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:50.100 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.100 07:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.100 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:50.100 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.100 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.100 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:50.100 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.100 07:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.100 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.100 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.100 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.100 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.359 00:15:50.359 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.359 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.359 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.359 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.359 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.359 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.359 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.359 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.359 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.359 { 00:15:50.359 "cntlid": 15, 00:15:50.359 "qid": 0, 00:15:50.359 "state": "enabled", 00:15:50.359 "thread": "nvmf_tgt_poll_group_000", 00:15:50.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.359 "listen_address": { 00:15:50.359 "trtype": "TCP", 00:15:50.359 "adrfam": "IPv4", 00:15:50.359 "traddr": "10.0.0.2", 00:15:50.359 "trsvcid": "4420" 00:15:50.359 }, 00:15:50.359 "peer_address": { 00:15:50.359 "trtype": "TCP", 00:15:50.359 "adrfam": "IPv4", 00:15:50.359 "traddr": "10.0.0.1", 00:15:50.359 
"trsvcid": "36196" 00:15:50.359 }, 00:15:50.359 "auth": { 00:15:50.359 "state": "completed", 00:15:50.359 "digest": "sha256", 00:15:50.359 "dhgroup": "ffdhe2048" 00:15:50.359 } 00:15:50.359 } 00:15:50.359 ]' 00:15:50.359 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.618 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.618 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.618 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.618 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.618 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.618 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.618 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.878 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:15:50.878 07:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:15:51.447 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.447 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.447 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.447 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.447 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.447 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.447 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.447 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.447 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.706 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:51.706 07:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.706 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.706 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.707 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.707 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.707 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.707 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.707 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.707 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.707 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.707 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.707 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.966 00:15:51.966 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.966 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.966 07:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.967 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.967 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.967 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.967 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.967 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.967 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.967 { 00:15:51.967 "cntlid": 17, 00:15:51.967 "qid": 0, 00:15:51.967 "state": "enabled", 00:15:51.967 "thread": "nvmf_tgt_poll_group_000", 00:15:51.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.967 "listen_address": { 00:15:51.967 "trtype": "TCP", 00:15:51.967 "adrfam": "IPv4", 
00:15:51.967 "traddr": "10.0.0.2", 00:15:51.967 "trsvcid": "4420" 00:15:51.967 }, 00:15:51.967 "peer_address": { 00:15:51.967 "trtype": "TCP", 00:15:51.967 "adrfam": "IPv4", 00:15:51.967 "traddr": "10.0.0.1", 00:15:51.967 "trsvcid": "36222" 00:15:51.967 }, 00:15:51.967 "auth": { 00:15:51.967 "state": "completed", 00:15:51.967 "digest": "sha256", 00:15:51.967 "dhgroup": "ffdhe3072" 00:15:51.967 } 00:15:51.967 } 00:15:51.967 ]' 00:15:51.967 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.225 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.225 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.225 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.225 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.225 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.225 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.226 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.484 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:15:52.485 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:15:53.053 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.053 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.053 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.053 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.053 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.053 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.053 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.053 07:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.053 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:53.053 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.053 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.053 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:53.053 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.053 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.053 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.053 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.053 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.312 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.312 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.312 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.312 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.312 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.572 { 
00:15:53.572 "cntlid": 19, 00:15:53.572 "qid": 0, 00:15:53.572 "state": "enabled", 00:15:53.572 "thread": "nvmf_tgt_poll_group_000", 00:15:53.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.572 "listen_address": { 00:15:53.572 "trtype": "TCP", 00:15:53.572 "adrfam": "IPv4", 00:15:53.572 "traddr": "10.0.0.2", 00:15:53.572 "trsvcid": "4420" 00:15:53.572 }, 00:15:53.572 "peer_address": { 00:15:53.572 "trtype": "TCP", 00:15:53.572 "adrfam": "IPv4", 00:15:53.572 "traddr": "10.0.0.1", 00:15:53.572 "trsvcid": "36244" 00:15:53.572 }, 00:15:53.572 "auth": { 00:15:53.572 "state": "completed", 00:15:53.572 "digest": "sha256", 00:15:53.572 "dhgroup": "ffdhe3072" 00:15:53.572 } 00:15:53.572 } 00:15:53.572 ]' 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.572 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.831 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.831 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.831 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.831 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.831 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.831 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.091 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:15:54.091 07:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.680 07:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.954 00:15:54.954 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.954 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.954 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.226 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.226 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.226 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.226 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.226 07:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.226 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.226 { 00:15:55.226 "cntlid": 21, 00:15:55.226 "qid": 0, 00:15:55.226 "state": "enabled", 00:15:55.226 "thread": "nvmf_tgt_poll_group_000", 00:15:55.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.226 "listen_address": { 00:15:55.226 "trtype": "TCP", 00:15:55.226 "adrfam": "IPv4", 00:15:55.226 "traddr": "10.0.0.2", 00:15:55.226 "trsvcid": "4420" 00:15:55.226 }, 00:15:55.226 "peer_address": { 00:15:55.226 "trtype": "TCP", 00:15:55.226 "adrfam": "IPv4", 00:15:55.226 "traddr": "10.0.0.1", 00:15:55.226 "trsvcid": "36274" 00:15:55.226 }, 00:15:55.226 "auth": { 00:15:55.226 "state": "completed", 00:15:55.226 "digest": "sha256", 00:15:55.226 "dhgroup": "ffdhe3072" 00:15:55.226 } 00:15:55.226 } 00:15:55.226 ]' 00:15:55.226 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.226 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.226 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.226 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.226 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.505 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.505 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.505 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.505 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:15:55.505 07:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:15:56.074 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.074 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.074 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.074 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.074 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:56.074 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.074 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:56.074 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:56.332 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:56.332 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.332 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.333 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:56.333 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:56.333 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.333 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:56.333 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.333 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.333 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.333 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:56.333 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.333 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.592 00:15:56.592 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.592 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.592 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.851 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.851 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.852 07:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.852 { 00:15:56.852 "cntlid": 23, 00:15:56.852 "qid": 0, 00:15:56.852 "state": "enabled", 00:15:56.852 "thread": "nvmf_tgt_poll_group_000", 00:15:56.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:56.852 "listen_address": { 00:15:56.852 "trtype": "TCP", 00:15:56.852 "adrfam": "IPv4", 00:15:56.852 "traddr": "10.0.0.2", 00:15:56.852 "trsvcid": "4420" 00:15:56.852 }, 00:15:56.852 "peer_address": { 00:15:56.852 "trtype": "TCP", 00:15:56.852 "adrfam": "IPv4", 00:15:56.852 "traddr": "10.0.0.1", 00:15:56.852 "trsvcid": "36304" 00:15:56.852 }, 00:15:56.852 "auth": { 00:15:56.852 "state": "completed", 00:15:56.852 "digest": "sha256", 00:15:56.852 "dhgroup": "ffdhe3072" 00:15:56.852 } 00:15:56.852 } 00:15:56.852 ]' 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.852 07:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.111 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:15:57.111 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:15:57.681 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.681 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.681 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.681 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.681 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:57.681 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.681 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.681 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.681 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.940 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.941 07:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.199 00:15:58.199 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.199 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.199 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.459 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.459 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.459 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.459 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.459 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.460 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.460 { 00:15:58.460 "cntlid": 25, 00:15:58.460 "qid": 0, 00:15:58.460 "state": "enabled", 00:15:58.460 "thread": "nvmf_tgt_poll_group_000", 00:15:58.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.460 "listen_address": { 00:15:58.460 "trtype": "TCP", 00:15:58.460 "adrfam": "IPv4", 00:15:58.460 "traddr": "10.0.0.2", 00:15:58.460 "trsvcid": "4420" 00:15:58.460 }, 00:15:58.460 "peer_address": { 00:15:58.460 "trtype": "TCP", 00:15:58.460 "adrfam": "IPv4", 00:15:58.460 "traddr": "10.0.0.1", 00:15:58.460 "trsvcid": "36340" 00:15:58.460 }, 00:15:58.460 "auth": { 00:15:58.460 "state": "completed", 00:15:58.460 "digest": "sha256", 00:15:58.460 "dhgroup": "ffdhe4096" 00:15:58.460 } 00:15:58.460 } 00:15:58.460 ]' 00:15:58.460 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.460 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.460 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.460 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.460 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.460 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.460 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.460 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.720 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:15:58.720 07:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:15:59.289 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.289 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.289 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.289 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.289 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.289 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.290 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.290 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.549 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:59.549 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.550 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.809 00:15:59.809 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.809 07:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.809 07:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.069 { 00:16:00.069 "cntlid": 27, 00:16:00.069 "qid": 0, 00:16:00.069 "state": "enabled", 00:16:00.069 "thread": "nvmf_tgt_poll_group_000", 00:16:00.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.069 "listen_address": { 00:16:00.069 "trtype": "TCP", 00:16:00.069 "adrfam": "IPv4", 00:16:00.069 "traddr": "10.0.0.2", 00:16:00.069 "trsvcid": "4420" 00:16:00.069 }, 00:16:00.069 "peer_address": { 00:16:00.069 "trtype": "TCP", 00:16:00.069 "adrfam": "IPv4", 00:16:00.069 "traddr": "10.0.0.1", 00:16:00.069 "trsvcid": "36360" 00:16:00.069 }, 00:16:00.069 "auth": { 00:16:00.069 "state": "completed", 00:16:00.069 "digest": "sha256", 00:16:00.069 "dhgroup": "ffdhe4096" 00:16:00.069 } 00:16:00.069 } 00:16:00.069 ]' 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.069 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.329 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:00.329 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:00.897 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.897 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.897 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.897 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.897 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.897 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.897 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.897 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:00.897 07:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.156 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.157 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.157 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.416 00:16:01.416 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.416 07:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.416 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.676 { 00:16:01.676 "cntlid": 29, 00:16:01.676 "qid": 0, 00:16:01.676 "state": "enabled", 00:16:01.676 "thread": "nvmf_tgt_poll_group_000", 00:16:01.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:01.676 "listen_address": { 00:16:01.676 "trtype": "TCP", 00:16:01.676 "adrfam": "IPv4", 00:16:01.676 "traddr": "10.0.0.2", 00:16:01.676 "trsvcid": "4420" 00:16:01.676 }, 00:16:01.676 "peer_address": { 00:16:01.676 "trtype": "TCP", 00:16:01.676 "adrfam": "IPv4", 00:16:01.676 "traddr": "10.0.0.1", 00:16:01.676 "trsvcid": "43732" 00:16:01.676 }, 00:16:01.676 "auth": { 00:16:01.676 "state": "completed", 00:16:01.676 "digest": "sha256", 00:16:01.676 "dhgroup": "ffdhe4096" 00:16:01.676 } 00:16:01.676 } 00:16:01.676 ]' 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.676 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.936 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:01.936 07:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret 
DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:02.504 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.504 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.504 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.504 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.504 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.504 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.504 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:02.504 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.764 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.024 00:16:03.024 07:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.024 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.024 07:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.284 { 00:16:03.284 "cntlid": 31, 00:16:03.284 "qid": 0, 00:16:03.284 "state": "enabled", 00:16:03.284 "thread": "nvmf_tgt_poll_group_000", 00:16:03.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.284 "listen_address": { 00:16:03.284 "trtype": "TCP", 00:16:03.284 "adrfam": "IPv4", 00:16:03.284 "traddr": "10.0.0.2", 00:16:03.284 "trsvcid": "4420" 00:16:03.284 }, 00:16:03.284 "peer_address": { 00:16:03.284 "trtype": "TCP", 00:16:03.284 "adrfam": "IPv4", 00:16:03.284 "traddr": "10.0.0.1", 00:16:03.284 "trsvcid": "43760" 00:16:03.284 }, 00:16:03.284 "auth": { 00:16:03.284 "state": "completed", 00:16:03.284 "digest": "sha256", 00:16:03.284 "dhgroup": "ffdhe4096" 00:16:03.284 } 00:16:03.284 } 00:16:03.284 ]' 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.284 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.544 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:03.544 07:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:04.112 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.112 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.112 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.112 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.112 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.112 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.112 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.112 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.112 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.371 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.372 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.631 00:16:04.631 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.631 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.631 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.890 { 00:16:04.890 "cntlid": 33, 00:16:04.890 "qid": 0, 00:16:04.890 "state": "enabled", 00:16:04.890 "thread": "nvmf_tgt_poll_group_000", 00:16:04.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:04.890 "listen_address": { 00:16:04.890 "trtype": "TCP", 00:16:04.890 "adrfam": "IPv4", 00:16:04.890 "traddr": "10.0.0.2", 00:16:04.890 "trsvcid": "4420" 00:16:04.890 }, 00:16:04.890 "peer_address": { 00:16:04.890 "trtype": "TCP", 00:16:04.890 "adrfam": "IPv4", 00:16:04.890 "traddr": "10.0.0.1", 00:16:04.890 "trsvcid": "43770" 00:16:04.890 }, 00:16:04.890 "auth": { 00:16:04.890 "state": "completed", 00:16:04.890 "digest": "sha256", 00:16:04.890 "dhgroup": "ffdhe6144" 00:16:04.890 } 00:16:04.890 } 00:16:04.890 ]' 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.890 07:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.149 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret 
DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:05.149 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:05.718 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.718 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.718 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.718 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.718 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.718 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.718 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.718 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.978 07:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.237 00:16:06.237 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.237 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.237 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.497 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.497 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.497 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.497 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.497 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.497 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.497 { 00:16:06.497 "cntlid": 35, 00:16:06.497 "qid": 0, 00:16:06.497 "state": "enabled", 00:16:06.497 "thread": "nvmf_tgt_poll_group_000", 00:16:06.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:06.497 "listen_address": { 00:16:06.497 "trtype": "TCP", 00:16:06.497 "adrfam": "IPv4", 00:16:06.497 "traddr": "10.0.0.2", 00:16:06.497 "trsvcid": "4420" 00:16:06.497 }, 00:16:06.497 "peer_address": { 00:16:06.497 "trtype": "TCP", 00:16:06.497 "adrfam": "IPv4", 00:16:06.497 "traddr": "10.0.0.1", 00:16:06.497 "trsvcid": "43792" 00:16:06.497 }, 00:16:06.498 "auth": { 00:16:06.498 "state": "completed", 00:16:06.498 "digest": "sha256", 00:16:06.498 "dhgroup": "ffdhe6144" 00:16:06.498 } 00:16:06.498 } 00:16:06.498 ]' 00:16:06.498 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.498 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.498 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.498 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.498 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.498 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.498 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.498 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.757 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:06.757 07:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:07.327 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.327 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.327 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.327 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.327 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.327 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.327 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.327 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.587 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.846 00:16:08.106 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.106 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.106 07:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.106 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.106 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.106 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.106 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.107 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.107 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.107 { 00:16:08.107 "cntlid": 37, 00:16:08.107 "qid": 0, 00:16:08.107 "state": "enabled", 00:16:08.107 "thread": "nvmf_tgt_poll_group_000", 00:16:08.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:08.107 "listen_address": { 00:16:08.107 "trtype": "TCP", 00:16:08.107 "adrfam": "IPv4", 00:16:08.107 "traddr": "10.0.0.2", 00:16:08.107 "trsvcid": "4420" 00:16:08.107 }, 00:16:08.107 "peer_address": { 00:16:08.107 "trtype": "TCP", 00:16:08.107 "adrfam": "IPv4", 00:16:08.107 "traddr": "10.0.0.1", 00:16:08.107 "trsvcid": "43816" 00:16:08.107 }, 00:16:08.107 "auth": { 00:16:08.107 "state": "completed", 00:16:08.107 "digest": "sha256", 00:16:08.107 "dhgroup": "ffdhe6144" 00:16:08.107 } 00:16:08.107 } 00:16:08.107 ]' 00:16:08.107 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.107 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.107 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.366 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.366 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.366 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.366 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:08.366 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.366 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:08.366 07:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:08.935 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.195 07:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.195 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.765 00:16:09.765 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.765 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.765 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.765 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.765 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.765 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.765 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.765 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.765 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.765 { 00:16:09.765 "cntlid": 39, 00:16:09.765 "qid": 0, 00:16:09.765 "state": "enabled", 00:16:09.765 "thread": "nvmf_tgt_poll_group_000", 00:16:09.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.765 "listen_address": { 00:16:09.765 "trtype": "TCP", 00:16:09.765 "adrfam": "IPv4", 00:16:09.765 "traddr": "10.0.0.2", 00:16:09.765 "trsvcid": "4420" 00:16:09.765 }, 00:16:09.765 "peer_address": { 00:16:09.765 "trtype": "TCP", 00:16:09.765 "adrfam": "IPv4", 00:16:09.765 "traddr": "10.0.0.1", 00:16:09.765 "trsvcid": "43846" 00:16:09.765 }, 00:16:09.765 "auth": { 00:16:09.765 "state": "completed", 00:16:09.765 "digest": "sha256", 00:16:09.765 "dhgroup": "ffdhe6144" 00:16:09.765 } 00:16:09.765 } 00:16:09.765 ]' 00:16:09.765 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.024 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.024 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.024 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.024 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.024 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:10.024 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.024 07:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.283 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:10.283 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:10.851 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.851 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.851 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.851 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.851 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.851 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.852 07:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.417 00:16:11.417 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.417 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.417 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.675 { 00:16:11.675 "cntlid": 41, 00:16:11.675 "qid": 0, 00:16:11.675 "state": "enabled", 00:16:11.675 "thread": "nvmf_tgt_poll_group_000", 00:16:11.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:11.675 "listen_address": { 00:16:11.675 "trtype": "TCP", 00:16:11.675 "adrfam": "IPv4", 00:16:11.675 "traddr": "10.0.0.2", 00:16:11.675 "trsvcid": "4420" 00:16:11.675 }, 00:16:11.675 "peer_address": { 00:16:11.675 "trtype": "TCP", 00:16:11.675 "adrfam": "IPv4", 00:16:11.675 "traddr": "10.0.0.1", 00:16:11.675 "trsvcid": "42952" 00:16:11.675 }, 00:16:11.675 "auth": { 00:16:11.675 "state": "completed", 00:16:11.675 "digest": "sha256", 00:16:11.675 "dhgroup": "ffdhe8192" 00:16:11.675 } 00:16:11.675 } 00:16:11.675 ]' 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.675 07:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.675 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.932 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:11.932 07:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:12.499 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.499 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.499 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.499 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.499 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.499 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.499 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.499 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.758 07:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.326 00:16:13.327 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.327 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.327 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.327 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.327 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.327 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.327 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.327 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.327 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.327 { 00:16:13.327 "cntlid": 43, 00:16:13.327 "qid": 0, 00:16:13.327 "state": "enabled", 00:16:13.327 "thread": "nvmf_tgt_poll_group_000", 00:16:13.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.327 "listen_address": { 00:16:13.327 "trtype": "TCP", 00:16:13.327 "adrfam": "IPv4", 00:16:13.327 "traddr": "10.0.0.2", 00:16:13.327 "trsvcid": "4420" 00:16:13.327 }, 00:16:13.327 "peer_address": { 00:16:13.327 "trtype": "TCP", 00:16:13.327 "adrfam": "IPv4", 00:16:13.327 "traddr": "10.0.0.1", 00:16:13.327 "trsvcid": "42976" 00:16:13.327 }, 00:16:13.327 "auth": { 00:16:13.327 "state": "completed", 00:16:13.327 "digest": "sha256", 00:16:13.327 "dhgroup": "ffdhe8192" 00:16:13.327 } 00:16:13.327 } 00:16:13.327 ]' 00:16:13.327 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.585 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:13.585 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.585 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:13.585 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.585 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.585 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.585 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.844 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:13.844 07:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:14.411 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.411 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.411 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.411 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.411 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.411 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.411 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.412 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:14.671 07:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.671 07:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.940 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.200 { 00:16:15.200 "cntlid": 45, 00:16:15.200 "qid": 0, 00:16:15.200 "state": "enabled", 00:16:15.200 "thread": "nvmf_tgt_poll_group_000", 00:16:15.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.200 "listen_address": { 00:16:15.200 "trtype": "TCP", 00:16:15.200 "adrfam": "IPv4", 00:16:15.200 "traddr": "10.0.0.2", 00:16:15.200 "trsvcid": "4420" 00:16:15.200 }, 00:16:15.200 "peer_address": { 00:16:15.200 "trtype": "TCP", 00:16:15.200 "adrfam": "IPv4", 00:16:15.200 "traddr": "10.0.0.1", 00:16:15.200 "trsvcid": "43016" 00:16:15.200 }, 00:16:15.200 "auth": { 00:16:15.200 "state": "completed", 00:16:15.200 "digest": "sha256", 00:16:15.200 "dhgroup": "ffdhe8192" 00:16:15.200 } 00:16:15.200 } 00:16:15.200 ]' 00:16:15.200 
07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.200 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.458 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.458 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.458 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.458 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.458 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.717 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:15.717 07:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.285 07:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.285 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.852 00:16:16.852 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.852 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.852 07:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.110 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.110 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.110 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.110 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.110 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.110 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.110 { 00:16:17.110 "cntlid": 47, 00:16:17.110 "qid": 0, 00:16:17.110 "state": "enabled", 00:16:17.110 "thread": "nvmf_tgt_poll_group_000", 00:16:17.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.110 "listen_address": { 00:16:17.110 "trtype": "TCP", 00:16:17.110 "adrfam": "IPv4", 00:16:17.110 "traddr": "10.0.0.2", 00:16:17.110 "trsvcid": "4420" 00:16:17.110 }, 00:16:17.110 "peer_address": { 00:16:17.110 "trtype": "TCP", 00:16:17.110 "adrfam": "IPv4", 00:16:17.110 "traddr": "10.0.0.1", 00:16:17.110 "trsvcid": "43042" 00:16:17.110 }, 00:16:17.110 "auth": { 00:16:17.110 "state": "completed", 00:16:17.110 
"digest": "sha256", 00:16:17.110 "dhgroup": "ffdhe8192" 00:16:17.110 } 00:16:17.110 } 00:16:17.110 ]' 00:16:17.110 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.111 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.111 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.111 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.111 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.111 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.111 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.111 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.369 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:17.369 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:17.938 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.938 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.938 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.938 07:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.938 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.938 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:17.938 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.938 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.938 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.938 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:18.197 07:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.197 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.456 00:16:18.456 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.456 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.456 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.715 { 00:16:18.715 "cntlid": 49, 00:16:18.715 "qid": 0, 00:16:18.715 "state": "enabled", 00:16:18.715 "thread": "nvmf_tgt_poll_group_000", 00:16:18.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.715 "listen_address": { 00:16:18.715 "trtype": "TCP", 00:16:18.715 "adrfam": "IPv4", 
00:16:18.715 "traddr": "10.0.0.2", 00:16:18.715 "trsvcid": "4420" 00:16:18.715 }, 00:16:18.715 "peer_address": { 00:16:18.715 "trtype": "TCP", 00:16:18.715 "adrfam": "IPv4", 00:16:18.715 "traddr": "10.0.0.1", 00:16:18.715 "trsvcid": "43070" 00:16:18.715 }, 00:16:18.715 "auth": { 00:16:18.715 "state": "completed", 00:16:18.715 "digest": "sha384", 00:16:18.715 "dhgroup": "null" 00:16:18.715 } 00:16:18.715 } 00:16:18.715 ]' 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.715 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.975 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:18.975 07:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:19.543 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.543 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.543 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.543 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.543 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.543 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.543 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:19.543 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.802 07:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.061 00:16:20.061 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.061 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.061 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.319 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.319 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.319 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.319 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.319 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.319 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.319 { 00:16:20.319 "cntlid": 51, 00:16:20.319 "qid": 0, 00:16:20.319 "state": "enabled", 
00:16:20.319 "thread": "nvmf_tgt_poll_group_000", 00:16:20.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.319 "listen_address": { 00:16:20.319 "trtype": "TCP", 00:16:20.319 "adrfam": "IPv4", 00:16:20.319 "traddr": "10.0.0.2", 00:16:20.319 "trsvcid": "4420" 00:16:20.319 }, 00:16:20.319 "peer_address": { 00:16:20.320 "trtype": "TCP", 00:16:20.320 "adrfam": "IPv4", 00:16:20.320 "traddr": "10.0.0.1", 00:16:20.320 "trsvcid": "56110" 00:16:20.320 }, 00:16:20.320 "auth": { 00:16:20.320 "state": "completed", 00:16:20.320 "digest": "sha384", 00:16:20.320 "dhgroup": "null" 00:16:20.320 } 00:16:20.320 } 00:16:20.320 ]' 00:16:20.320 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.320 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.320 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.320 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.320 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.320 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.320 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.320 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.578 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:20.578 07:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:21.145 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.145 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.145 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.145 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.145 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.145 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.145 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:21.145 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.404 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.662 00:16:21.662 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.662 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.662 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.662 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.939 07:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.939 { 00:16:21.939 "cntlid": 53, 00:16:21.939 "qid": 0, 00:16:21.939 "state": "enabled", 00:16:21.939 "thread": "nvmf_tgt_poll_group_000", 00:16:21.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.939 "listen_address": { 00:16:21.939 "trtype": "TCP", 00:16:21.939 "adrfam": "IPv4", 00:16:21.939 "traddr": "10.0.0.2", 00:16:21.939 "trsvcid": "4420" 00:16:21.939 }, 00:16:21.939 "peer_address": { 00:16:21.939 "trtype": "TCP", 00:16:21.939 "adrfam": "IPv4", 00:16:21.939 "traddr": "10.0.0.1", 00:16:21.939 "trsvcid": "56128" 00:16:21.939 }, 00:16:21.939 "auth": { 00:16:21.939 "state": "completed", 00:16:21.939 "digest": "sha384", 00:16:21.939 "dhgroup": "null" 00:16:21.939 } 00:16:21.939 } 00:16:21.939 ]' 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.939 07:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.196 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:22.196 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:22.762 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.762 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.762 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.762 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.762 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.762 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:22.762 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:22.762 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:23.020 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.021 07:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.280 00:16:23.280 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.280 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.280 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.280 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.280 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.280 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.280 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.280 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.280 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.280 { 00:16:23.280 "cntlid": 55, 00:16:23.280 "qid": 0, 00:16:23.280 "state": "enabled", 00:16:23.280 "thread": "nvmf_tgt_poll_group_000", 00:16:23.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.280 "listen_address": { 00:16:23.280 "trtype": "TCP", 00:16:23.280 "adrfam": "IPv4", 00:16:23.280 "traddr": "10.0.0.2", 00:16:23.280 "trsvcid": "4420" 00:16:23.280 }, 00:16:23.280 "peer_address": { 00:16:23.280 "trtype": "TCP", 00:16:23.280 "adrfam": "IPv4", 00:16:23.280 "traddr": "10.0.0.1", 00:16:23.280 "trsvcid": "56148" 00:16:23.280 }, 00:16:23.280 "auth": { 00:16:23.280 "state": "completed", 00:16:23.280 "digest": "sha384", 00:16:23.280 "dhgroup": "null" 00:16:23.280 } 00:16:23.280 } 00:16:23.280 ]' 00:16:23.280 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.538 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.538 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.538 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.538 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.539 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.539 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.539 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.796 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:23.796 07:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.363 07:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.363 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.621 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.891 { 00:16:24.891 "cntlid": 57, 00:16:24.891 "qid": 0, 00:16:24.891 "state": "enabled", 00:16:24.891 "thread": "nvmf_tgt_poll_group_000", 00:16:24.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.891 "listen_address": { 00:16:24.891 "trtype": "TCP", 00:16:24.891 "adrfam": "IPv4", 00:16:24.891 "traddr": "10.0.0.2", 00:16:24.891 "trsvcid": "4420" 00:16:24.891 }, 00:16:24.891 "peer_address": { 00:16:24.891 "trtype": "TCP", 00:16:24.891 "adrfam": "IPv4", 00:16:24.891 "traddr": "10.0.0.1", 00:16:24.891 "trsvcid": "56160" 00:16:24.891 }, 00:16:24.891 "auth": { 00:16:24.891 "state": "completed", 00:16:24.891 "digest": "sha384", 00:16:24.891 "dhgroup": "ffdhe2048" 00:16:24.891 } 00:16:24.891 } 00:16:24.891 ]' 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.891 07:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.173 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.173 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.173 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.173 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.173 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.173 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:25.173 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:25.747 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.747 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.747 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
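The qpair dump that follows the attach is how each round is verified on the target side: nvmf_subsystem_get_qpairs must report an enabled qpair whose negotiated auth parameters match what was just configured. A minimal form of the check, using the same jq filters as the trace (expected values shown for the ffdhe2048 rounds):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]     # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]  # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # authentication finished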
common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.747 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.747 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.747 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.747 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.747 07:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.007 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.266 00:16:26.266 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.266 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.266 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.525 { 00:16:26.525 "cntlid": 59, 00:16:26.525 "qid": 0, 00:16:26.525 "state": "enabled", 00:16:26.525 "thread": "nvmf_tgt_poll_group_000", 00:16:26.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.525 "listen_address": { 00:16:26.525 "trtype": "TCP", 00:16:26.525 "adrfam": "IPv4", 00:16:26.525 "traddr": "10.0.0.2", 00:16:26.525 "trsvcid": "4420" 00:16:26.525 }, 00:16:26.525 "peer_address": { 00:16:26.525 "trtype": "TCP", 00:16:26.525 "adrfam": "IPv4", 00:16:26.525 "traddr": "10.0.0.1", 00:16:26.525 "trsvcid": "56192" 00:16:26.525 }, 00:16:26.525 "auth": { 00:16:26.525 "state": "completed", 00:16:26.525 "digest": "sha384", 00:16:26.525 "dhgroup": "ffdhe2048" 00:16:26.525 } 00:16:26.525 } 00:16:26.525 ]' 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.525 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.794 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.794 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.794 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.794 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:26.794 07:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:27.363 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.363 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
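Each round finishes by exercising the same credentials from the kernel initiator: nvme-cli connects with the generated DHHC-1 secrets (abbreviated below; the full values are in the trace), disconnects, and the host entry is removed so the next dhgroup/key combination starts from a clean subsystem:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:01:M2Iw...' --dhchap-ctrl-secret 'DHHC-1:02:YjY5...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562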
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.363 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.363 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.363 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.363 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.363 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.363 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.621 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.904 00:16:27.904 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.904 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.904 07:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.172 { 00:16:28.172 "cntlid": 61, 00:16:28.172 "qid": 0, 00:16:28.172 "state": "enabled", 00:16:28.172 "thread": "nvmf_tgt_poll_group_000", 00:16:28.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.172 "listen_address": { 00:16:28.172 "trtype": "TCP", 00:16:28.172 "adrfam": "IPv4", 00:16:28.172 "traddr": "10.0.0.2", 00:16:28.172 "trsvcid": "4420" 00:16:28.172 }, 00:16:28.172 "peer_address": { 00:16:28.172 "trtype": "TCP", 00:16:28.172 "adrfam": "IPv4", 00:16:28.172 "traddr": "10.0.0.1", 00:16:28.172 "trsvcid": "56208" 00:16:28.172 }, 00:16:28.172 "auth": { 00:16:28.172 "state": "completed", 00:16:28.172 "digest": "sha384", 00:16:28.172 "dhgroup": "ffdhe2048" 00:16:28.172 } 00:16:28.172 } 00:16:28.172 ]' 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.172 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.431 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:28.431 07:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:28.999 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.999 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.999 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.999 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.999 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.999 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.999 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.999 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.257 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:29.257 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.257 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.257 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.257 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.257 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.258 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:29.258 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.258 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.258 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.258 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.258 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.258 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.516 00:16:29.516 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.516 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:29.517 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.775 { 00:16:29.775 "cntlid": 63, 00:16:29.775 "qid": 0, 00:16:29.775 "state": "enabled", 00:16:29.775 "thread": "nvmf_tgt_poll_group_000", 00:16:29.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.775 "listen_address": { 00:16:29.775 "trtype": "TCP", 00:16:29.775 "adrfam": "IPv4", 00:16:29.775 "traddr": "10.0.0.2", 00:16:29.775 "trsvcid": "4420" 00:16:29.775 }, 00:16:29.775 "peer_address": { 00:16:29.775 "trtype": "TCP", 00:16:29.775 "adrfam": "IPv4", 00:16:29.775 "traddr": "10.0.0.1", 00:16:29.775 "trsvcid": "56244" 00:16:29.775 }, 00:16:29.775 "auth": { 00:16:29.775 "state": "completed", 00:16:29.775 "digest": "sha384", 00:16:29.775 "dhgroup": "ffdhe2048" 00:16:29.775 } 00:16:29.775 } 00:16:29.775 ]' 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.775 07:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.034 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:30.034 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:30.603 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:30.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.603 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.603 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.603 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.603 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.603 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.603 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.603 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:30.603 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.862 07:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.121 
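At this point the trace has moved from ffdhe2048 to ffdhe3072; the target/auth.sh@119 and @120 loop markers show the test is a plain nested sweep over DH groups and key IDs at a fixed sha384 digest. Reconstructed from those markers (array contents inferred from the groups and keys that actually appear in the trace), the driver loop looks roughly like:

  for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ffdhe3072, ffdhe4096, ...
    for keyid in "${!keys[@]}"; do       # 0..3; key3 has no paired controller key
      hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
      connect_authenticate sha384 "$dhgroup" "$keyid"
    done
  done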
00:16:31.121 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.121 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.121 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.379 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.379 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.379 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.379 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.379 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.380 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.380 { 00:16:31.380 "cntlid": 65, 00:16:31.380 "qid": 0, 00:16:31.380 "state": "enabled", 00:16:31.380 "thread": "nvmf_tgt_poll_group_000", 00:16:31.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.380 "listen_address": { 00:16:31.380 "trtype": "TCP", 00:16:31.380 "adrfam": "IPv4", 00:16:31.380 "traddr": "10.0.0.2", 00:16:31.380 "trsvcid": "4420" 00:16:31.380 }, 00:16:31.380 "peer_address": { 00:16:31.380 "trtype": "TCP", 00:16:31.380 "adrfam": "IPv4", 00:16:31.380 "traddr": "10.0.0.1", 00:16:31.380 "trsvcid": "55690" 00:16:31.380 }, 00:16:31.380 "auth": { 00:16:31.380 "state": "completed", 00:16:31.380 "digest": "sha384", 00:16:31.380 "dhgroup": "ffdhe3072" 00:16:31.380 } 00:16:31.380 } 00:16:31.380 ]' 00:16:31.380 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.380 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.380 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.380 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.380 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.380 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.380 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.380 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.638 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:31.638 07:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:32.210 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.210 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.210 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.210 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.210 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.469 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.727 00:16:32.727 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.727 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.727 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.986 { 00:16:32.986 "cntlid": 67, 00:16:32.986 "qid": 0, 00:16:32.986 "state": "enabled", 00:16:32.986 "thread": "nvmf_tgt_poll_group_000", 00:16:32.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:32.986 "listen_address": { 00:16:32.986 "trtype": "TCP", 00:16:32.986 "adrfam": "IPv4", 00:16:32.986 "traddr": "10.0.0.2", 00:16:32.986 "trsvcid": "4420" 00:16:32.986 }, 00:16:32.986 "peer_address": { 00:16:32.986 "trtype": "TCP", 00:16:32.986 "adrfam": "IPv4", 00:16:32.986 "traddr": "10.0.0.1", 00:16:32.986 "trsvcid": "55718" 00:16:32.986 }, 00:16:32.986 "auth": { 00:16:32.986 "state": "completed", 00:16:32.986 "digest": "sha384", 00:16:32.986 "dhgroup": "ffdhe3072" 00:16:32.986 } 00:16:32.986 } 00:16:32.986 ]' 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.986 07:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.986 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.986 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.986 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.245 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret 
DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:33.245 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:33.811 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.811 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.811 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.811 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.811 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.811 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.811 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.811 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.070 07:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.329 00:16:34.329 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.329 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.329 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.588 { 00:16:34.588 "cntlid": 69, 00:16:34.588 "qid": 0, 00:16:34.588 "state": "enabled", 00:16:34.588 "thread": "nvmf_tgt_poll_group_000", 00:16:34.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.588 "listen_address": { 00:16:34.588 "trtype": "TCP", 00:16:34.588 "adrfam": "IPv4", 00:16:34.588 "traddr": "10.0.0.2", 00:16:34.588 "trsvcid": "4420" 00:16:34.588 }, 00:16:34.588 "peer_address": { 00:16:34.588 "trtype": "TCP", 00:16:34.588 "adrfam": "IPv4", 00:16:34.588 "traddr": "10.0.0.1", 00:16:34.588 "trsvcid": "55744" 00:16:34.588 }, 00:16:34.588 "auth": { 00:16:34.588 "state": "completed", 00:16:34.588 "digest": "sha384", 00:16:34.588 "dhgroup": "ffdhe3072" 00:16:34.588 } 00:16:34.588 } 00:16:34.588 ]' 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.588 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:34.847 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:34.847 07:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:35.467 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.467 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.467 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.467 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.467 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.467 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.467 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.467 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
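The key3 rounds (cntlid 63 and 71 above) differ from the others: there is no ckey3, so the ${ckeys[$3]:+...} expansion in connect_authenticate adds nothing, nvmf_subsystem_add_host and the controller attach carry only --dhchap-key key3 (one-way authentication), and the matching nvme connect passes only --dhchap-secret with no --dhchap-ctrl-secret. In isolation the expansion behaves like this (keyid stands in for the function's positional $3):

  # ckey becomes an empty array when no controller key exists for this key ID
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-key "key$keyid" "${ckey[@]}"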
00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.749 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.749 07:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.047 { 00:16:36.047 "cntlid": 71, 00:16:36.047 "qid": 0, 00:16:36.047 "state": "enabled", 00:16:36.047 "thread": "nvmf_tgt_poll_group_000", 00:16:36.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.047 "listen_address": { 00:16:36.047 "trtype": "TCP", 00:16:36.047 "adrfam": "IPv4", 00:16:36.047 "traddr": "10.0.0.2", 00:16:36.047 "trsvcid": "4420" 00:16:36.047 }, 00:16:36.047 "peer_address": { 00:16:36.047 "trtype": "TCP", 00:16:36.047 "adrfam": "IPv4", 00:16:36.047 "traddr": "10.0.0.1", 00:16:36.047 "trsvcid": "55756" 00:16:36.047 }, 00:16:36.047 "auth": { 00:16:36.047 "state": "completed", 00:16:36.047 "digest": "sha384", 00:16:36.047 "dhgroup": "ffdhe3072" 00:16:36.047 } 00:16:36.047 } 00:16:36.047 ]' 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.047 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.326 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.326 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.326 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.326 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:36.326 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:36.894 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.895 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.895 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.895 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.895 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.895 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.895 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.895 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.895 07:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
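Two more recurring checks bracket each round on the host side: after the attach, bdev_nvme_get_controllers must report the controller created as nvme0, and before the nvme-cli pass that controller is detached again over the host RPC socket, both using the commands seen throughout the trace:

  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  hostrpc bdev_nvme_detach_controller nvme0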
00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.154 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.413 00:16:37.413 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.413 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.413 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.672 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.673 { 00:16:37.673 "cntlid": 73, 00:16:37.673 "qid": 0, 00:16:37.673 "state": "enabled", 00:16:37.673 "thread": "nvmf_tgt_poll_group_000", 00:16:37.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.673 "listen_address": { 00:16:37.673 "trtype": "TCP", 00:16:37.673 "adrfam": "IPv4", 00:16:37.673 "traddr": "10.0.0.2", 00:16:37.673 "trsvcid": "4420" 00:16:37.673 }, 00:16:37.673 "peer_address": { 00:16:37.673 "trtype": "TCP", 00:16:37.673 "adrfam": "IPv4", 00:16:37.673 "traddr": "10.0.0.1", 00:16:37.673 "trsvcid": "55770" 00:16:37.673 }, 00:16:37.673 "auth": { 00:16:37.673 "state": "completed", 00:16:37.673 "digest": "sha384", 00:16:37.673 "dhgroup": "ffdhe4096" 00:16:37.673 } 00:16:37.673 } 00:16:37.673 ]' 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.673 
07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.673 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.932 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:37.932 07:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:38.500 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.500 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.500 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.500 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.500 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.500 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.500 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.500 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.759 07:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.018 00:16:39.018 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.018 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.018 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.277 { 00:16:39.277 "cntlid": 75, 00:16:39.277 "qid": 0, 00:16:39.277 "state": "enabled", 00:16:39.277 "thread": "nvmf_tgt_poll_group_000", 00:16:39.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.277 "listen_address": { 00:16:39.277 "trtype": "TCP", 00:16:39.277 "adrfam": "IPv4", 00:16:39.277 "traddr": "10.0.0.2", 00:16:39.277 "trsvcid": "4420" 00:16:39.277 }, 00:16:39.277 "peer_address": { 00:16:39.277 "trtype": "TCP", 00:16:39.277 "adrfam": "IPv4", 00:16:39.277 "traddr": "10.0.0.1", 00:16:39.277 "trsvcid": "55816" 00:16:39.277 }, 00:16:39.277 "auth": { 00:16:39.277 "state": "completed", 00:16:39.277 "digest": "sha384", 00:16:39.277 "dhgroup": "ffdhe4096" 00:16:39.277 } 00:16:39.277 } 00:16:39.277 ]' 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:39.277 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.536 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.536 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.536 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.536 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:39.536 07:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:40.103 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.103 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.103 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.103 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.103 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.103 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.103 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:40.104 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.363 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.622 00:16:40.622 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.622 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.622 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.882 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.882 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.882 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.882 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.882 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.882 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.882 { 00:16:40.882 "cntlid": 77, 00:16:40.882 "qid": 0, 00:16:40.882 "state": "enabled", 00:16:40.882 "thread": "nvmf_tgt_poll_group_000", 00:16:40.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.882 "listen_address": { 00:16:40.882 "trtype": "TCP", 00:16:40.882 "adrfam": "IPv4", 00:16:40.882 "traddr": "10.0.0.2", 00:16:40.882 "trsvcid": "4420" 00:16:40.882 }, 00:16:40.882 "peer_address": { 00:16:40.882 "trtype": "TCP", 00:16:40.882 "adrfam": "IPv4", 00:16:40.882 "traddr": "10.0.0.1", 00:16:40.882 "trsvcid": "48674" 00:16:40.882 }, 00:16:40.882 "auth": { 00:16:40.882 "state": "completed", 00:16:40.882 "digest": "sha384", 00:16:40.882 "dhgroup": "ffdhe4096" 00:16:40.882 } 00:16:40.882 } 00:16:40.882 ]' 00:16:40.882 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.882 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.882 07:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.882 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.882 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.141 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.141 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.141 07:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.141 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:41.141 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:41.709 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.709 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.709 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.709 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.709 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.709 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.709 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.709 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.968 07:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.227 00:16:42.227 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.227 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.227 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.485 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.485 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.485 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.485 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.485 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.485 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.485 { 00:16:42.485 "cntlid": 79, 00:16:42.485 "qid": 0, 00:16:42.485 "state": "enabled", 00:16:42.485 "thread": "nvmf_tgt_poll_group_000", 00:16:42.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.485 "listen_address": { 00:16:42.485 "trtype": "TCP", 00:16:42.485 "adrfam": "IPv4", 00:16:42.485 "traddr": "10.0.0.2", 00:16:42.485 "trsvcid": "4420" 00:16:42.485 }, 00:16:42.485 "peer_address": { 00:16:42.485 "trtype": "TCP", 00:16:42.485 "adrfam": "IPv4", 00:16:42.485 "traddr": "10.0.0.1", 00:16:42.485 "trsvcid": "48688" 00:16:42.485 }, 00:16:42.485 "auth": { 00:16:42.485 "state": "completed", 00:16:42.485 "digest": "sha384", 00:16:42.485 "dhgroup": "ffdhe4096" 00:16:42.485 } 00:16:42.485 } 00:16:42.485 ]' 00:16:42.485 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.485 07:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.485 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.486 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.486 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.744 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.744 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.744 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.744 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:42.744 07:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:43.311 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.311 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.311 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.312 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.312 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.312 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.312 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.312 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.312 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.571 07:59:37 
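
Alongside the SPDK host stack, every pass also drives the in-kernel initiator: nvme-cli is handed the literal DHHC-1 secrets printed in the trace (abbreviated with "..." below), connects, disconnects, and the host entry is removed so the next key id starts from a clean subsystem. A minimal sketch of that step, reusing the RPC, SUBNQN and HOSTNQN shorthand from the sketch after the first pass above:

  # kernel host path: secrets are passed inline instead of as named key objects
  # ("..." stands for the base64 secret bodies shown in the log)
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:00:...:' \
      --dhchap-ctrl-secret 'DHHC-1:03:...:'
  nvme disconnect -n "$SUBNQN"        # expect: "disconnected 1 controller(s)"

  # target side: drop the host entry again before the next iteration
  "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
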
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.571 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.831 00:16:43.831 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.123 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.123 07:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.123 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.123 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.123 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.123 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.123 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.123 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.123 { 00:16:44.123 "cntlid": 81, 00:16:44.123 "qid": 0, 00:16:44.123 "state": "enabled", 00:16:44.123 "thread": "nvmf_tgt_poll_group_000", 00:16:44.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:44.123 "listen_address": { 00:16:44.123 "trtype": "TCP", 00:16:44.123 "adrfam": "IPv4", 00:16:44.123 "traddr": "10.0.0.2", 00:16:44.123 "trsvcid": "4420" 00:16:44.123 }, 00:16:44.123 "peer_address": { 00:16:44.123 "trtype": "TCP", 00:16:44.123 "adrfam": "IPv4", 00:16:44.123 "traddr": "10.0.0.1", 00:16:44.123 "trsvcid": "48706" 00:16:44.123 }, 00:16:44.123 "auth": { 00:16:44.123 "state": "completed", 00:16:44.123 "digest": 
"sha384", 00:16:44.123 "dhgroup": "ffdhe6144" 00:16:44.123 } 00:16:44.123 } 00:16:44.123 ]' 00:16:44.123 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.123 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.123 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.382 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.382 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.382 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.382 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.382 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.382 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:44.382 07:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:45.318 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.319 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.577 00:16:45.577 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.577 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.577 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.836 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.836 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.836 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.836 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.836 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.836 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.836 { 00:16:45.836 "cntlid": 83, 00:16:45.836 "qid": 0, 00:16:45.836 "state": "enabled", 00:16:45.836 "thread": "nvmf_tgt_poll_group_000", 00:16:45.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.836 "listen_address": { 00:16:45.836 "trtype": "TCP", 00:16:45.836 "adrfam": "IPv4", 00:16:45.836 "traddr": "10.0.0.2", 00:16:45.836 
"trsvcid": "4420" 00:16:45.836 }, 00:16:45.836 "peer_address": { 00:16:45.836 "trtype": "TCP", 00:16:45.836 "adrfam": "IPv4", 00:16:45.836 "traddr": "10.0.0.1", 00:16:45.836 "trsvcid": "48722" 00:16:45.836 }, 00:16:45.836 "auth": { 00:16:45.836 "state": "completed", 00:16:45.836 "digest": "sha384", 00:16:45.836 "dhgroup": "ffdhe6144" 00:16:45.836 } 00:16:45.836 } 00:16:45.836 ]' 00:16:45.836 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.836 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.836 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.095 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.095 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.095 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.095 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.095 07:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.095 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:46.095 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:46.663 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.663 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.663 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.663 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.922 
07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.922 07:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.490 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.490 { 00:16:47.490 "cntlid": 85, 00:16:47.490 "qid": 0, 00:16:47.490 "state": "enabled", 00:16:47.490 "thread": "nvmf_tgt_poll_group_000", 00:16:47.490 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.490 "listen_address": { 00:16:47.490 "trtype": "TCP", 00:16:47.490 "adrfam": "IPv4", 00:16:47.490 "traddr": "10.0.0.2", 00:16:47.490 "trsvcid": "4420" 00:16:47.490 }, 00:16:47.490 "peer_address": { 00:16:47.490 "trtype": "TCP", 00:16:47.490 "adrfam": "IPv4", 00:16:47.490 "traddr": "10.0.0.1", 00:16:47.490 "trsvcid": "48738" 00:16:47.490 }, 00:16:47.490 "auth": { 00:16:47.490 "state": "completed", 00:16:47.490 "digest": "sha384", 00:16:47.490 "dhgroup": "ffdhe6144" 00:16:47.490 } 00:16:47.490 } 00:16:47.490 ]' 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.490 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.750 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.750 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.750 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.750 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.750 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.009 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:48.009 07:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.578 07:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.578 07:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.147 00:16:49.147 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.147 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.147 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.147 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.147 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.147 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.147 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.147 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.147 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.147 { 00:16:49.147 "cntlid": 87, 
00:16:49.147 "qid": 0, 00:16:49.147 "state": "enabled", 00:16:49.147 "thread": "nvmf_tgt_poll_group_000", 00:16:49.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.147 "listen_address": { 00:16:49.147 "trtype": "TCP", 00:16:49.147 "adrfam": "IPv4", 00:16:49.147 "traddr": "10.0.0.2", 00:16:49.147 "trsvcid": "4420" 00:16:49.147 }, 00:16:49.147 "peer_address": { 00:16:49.147 "trtype": "TCP", 00:16:49.147 "adrfam": "IPv4", 00:16:49.147 "traddr": "10.0.0.1", 00:16:49.147 "trsvcid": "48772" 00:16:49.147 }, 00:16:49.147 "auth": { 00:16:49.147 "state": "completed", 00:16:49.147 "digest": "sha384", 00:16:49.147 "dhgroup": "ffdhe6144" 00:16:49.147 } 00:16:49.147 } 00:16:49.147 ]' 00:16:49.147 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.406 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.406 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.406 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.406 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.406 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.406 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.406 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.666 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:49.666 07:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.234 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.493 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.493 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.493 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.493 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.752 00:16:50.752 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.752 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.752 07:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.010 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.011 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.011 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.011 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.011 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.011 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.011 { 00:16:51.011 "cntlid": 89, 00:16:51.011 "qid": 0, 00:16:51.011 "state": "enabled", 00:16:51.011 "thread": "nvmf_tgt_poll_group_000", 00:16:51.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.011 "listen_address": { 00:16:51.011 "trtype": "TCP", 00:16:51.011 "adrfam": "IPv4", 00:16:51.011 "traddr": "10.0.0.2", 00:16:51.011 "trsvcid": "4420" 00:16:51.011 }, 00:16:51.011 "peer_address": { 00:16:51.011 "trtype": "TCP", 00:16:51.011 "adrfam": "IPv4", 00:16:51.011 "traddr": "10.0.0.1", 00:16:51.011 "trsvcid": "49520" 00:16:51.011 }, 00:16:51.011 "auth": { 00:16:51.011 "state": "completed", 00:16:51.011 "digest": "sha384", 00:16:51.011 "dhgroup": "ffdhe8192" 00:16:51.011 } 00:16:51.011 } 00:16:51.011 ]' 00:16:51.011 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.011 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.011 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.270 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.270 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.270 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.270 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.270 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.270 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:51.270 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:51.837 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.837 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.837 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.837 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.096 07:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.096 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.096 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.096 07:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.096 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.664 00:16:52.664 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.664 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.664 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.923 { 00:16:52.923 "cntlid": 91, 00:16:52.923 "qid": 0, 00:16:52.923 "state": "enabled", 00:16:52.923 "thread": "nvmf_tgt_poll_group_000", 00:16:52.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.923 "listen_address": { 00:16:52.923 "trtype": "TCP", 00:16:52.923 "adrfam": "IPv4", 00:16:52.923 "traddr": "10.0.0.2", 00:16:52.923 "trsvcid": "4420" 00:16:52.923 }, 00:16:52.923 "peer_address": { 00:16:52.923 "trtype": "TCP", 00:16:52.923 "adrfam": "IPv4", 00:16:52.923 "traddr": "10.0.0.1", 00:16:52.923 "trsvcid": "49538" 00:16:52.923 }, 00:16:52.923 "auth": { 00:16:52.923 "state": "completed", 00:16:52.923 "digest": "sha384", 00:16:52.923 "dhgroup": "ffdhe8192" 00:16:52.923 } 00:16:52.923 } 00:16:52.923 ]' 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.923 07:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.183 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:53.183 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:53.752 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.752 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.752 07:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.752 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.752 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.753 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.753 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:53.753 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:54.011 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:54.011 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.012 07:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.579 00:16:54.579 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.579 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.579 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.579 07:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.579 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.579 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.579 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.579 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.579 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.579 { 00:16:54.579 "cntlid": 93, 00:16:54.579 "qid": 0, 00:16:54.579 "state": "enabled", 00:16:54.579 "thread": "nvmf_tgt_poll_group_000", 00:16:54.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.579 "listen_address": { 00:16:54.579 "trtype": "TCP", 00:16:54.579 "adrfam": "IPv4", 00:16:54.579 "traddr": "10.0.0.2", 00:16:54.579 "trsvcid": "4420" 00:16:54.579 }, 00:16:54.580 "peer_address": { 00:16:54.580 "trtype": "TCP", 00:16:54.580 "adrfam": "IPv4", 00:16:54.580 "traddr": "10.0.0.1", 00:16:54.580 "trsvcid": "49564" 00:16:54.580 }, 00:16:54.580 "auth": { 00:16:54.580 "state": "completed", 00:16:54.580 "digest": "sha384", 00:16:54.580 "dhgroup": "ffdhe8192" 00:16:54.580 } 00:16:54.580 } 00:16:54.580 ]' 00:16:54.580 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.839 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.839 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.839 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.839 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.839 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.839 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.839 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.839 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:55.098 07:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:16:55.664 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.664 07:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.665 07:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.233 00:16:56.233 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.233 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.233 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.493 { 00:16:56.493 "cntlid": 95, 00:16:56.493 "qid": 0, 00:16:56.493 "state": "enabled", 00:16:56.493 "thread": "nvmf_tgt_poll_group_000", 00:16:56.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.493 "listen_address": { 00:16:56.493 "trtype": "TCP", 00:16:56.493 "adrfam": "IPv4", 00:16:56.493 "traddr": "10.0.0.2", 00:16:56.493 "trsvcid": "4420" 00:16:56.493 }, 00:16:56.493 "peer_address": { 00:16:56.493 "trtype": "TCP", 00:16:56.493 "adrfam": "IPv4", 00:16:56.493 "traddr": "10.0.0.1", 00:16:56.493 "trsvcid": "49582" 00:16:56.493 }, 00:16:56.493 "auth": { 00:16:56.493 "state": "completed", 00:16:56.493 "digest": "sha384", 00:16:56.493 "dhgroup": "ffdhe8192" 00:16:56.493 } 00:16:56.493 } 00:16:56.493 ]' 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.493 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.753 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:56.753 07:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:16:57.320 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.320 07:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.320 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.320 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.320 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.320 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:57.320 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.320 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.320 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.320 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.578 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.836 00:16:57.836 
07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.836 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.836 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.095 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.095 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.095 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.095 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.095 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.095 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.095 { 00:16:58.095 "cntlid": 97, 00:16:58.095 "qid": 0, 00:16:58.095 "state": "enabled", 00:16:58.095 "thread": "nvmf_tgt_poll_group_000", 00:16:58.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.095 "listen_address": { 00:16:58.095 "trtype": "TCP", 00:16:58.095 "adrfam": "IPv4", 00:16:58.095 "traddr": "10.0.0.2", 00:16:58.095 "trsvcid": "4420" 00:16:58.095 }, 00:16:58.095 "peer_address": { 00:16:58.095 "trtype": "TCP", 00:16:58.095 "adrfam": "IPv4", 00:16:58.095 "traddr": "10.0.0.1", 00:16:58.095 "trsvcid": "49608" 00:16:58.095 }, 00:16:58.095 "auth": { 00:16:58.095 "state": "completed", 00:16:58.095 "digest": "sha512", 00:16:58.095 "dhgroup": "null" 00:16:58.095 } 00:16:58.095 } 00:16:58.095 ]' 00:16:58.095 07:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.095 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.095 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.095 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:58.095 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.095 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.095 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.095 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.354 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:58.354 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:16:58.921 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.921 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.921 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.921 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.921 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.921 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.921 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.921 07:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.179 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.437 00:16:59.437 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.437 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.437 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.695 { 00:16:59.695 "cntlid": 99, 00:16:59.695 "qid": 0, 00:16:59.695 "state": "enabled", 00:16:59.695 "thread": "nvmf_tgt_poll_group_000", 00:16:59.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:59.695 "listen_address": { 00:16:59.695 "trtype": "TCP", 00:16:59.695 "adrfam": "IPv4", 00:16:59.695 "traddr": "10.0.0.2", 00:16:59.695 "trsvcid": "4420" 00:16:59.695 }, 00:16:59.695 "peer_address": { 00:16:59.695 "trtype": "TCP", 00:16:59.695 "adrfam": "IPv4", 00:16:59.695 "traddr": "10.0.0.1", 00:16:59.695 "trsvcid": "49638" 00:16:59.695 }, 00:16:59.695 "auth": { 00:16:59.695 "state": "completed", 00:16:59.695 "digest": "sha512", 00:16:59.695 "dhgroup": "null" 00:16:59.695 } 00:16:59.695 } 00:16:59.695 ]' 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.695 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.953 07:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:16:59.953 07:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:00.522 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.523 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.523 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.523 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.523 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.523 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.523 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.523 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.782 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:00.782 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:00.783 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.042 00:17:01.042 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.042 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.042 07:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.301 { 00:17:01.301 "cntlid": 101, 00:17:01.301 "qid": 0, 00:17:01.301 "state": "enabled", 00:17:01.301 "thread": "nvmf_tgt_poll_group_000", 00:17:01.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.301 "listen_address": { 00:17:01.301 "trtype": "TCP", 00:17:01.301 "adrfam": "IPv4", 00:17:01.301 "traddr": "10.0.0.2", 00:17:01.301 "trsvcid": "4420" 00:17:01.301 }, 00:17:01.301 "peer_address": { 00:17:01.301 "trtype": "TCP", 00:17:01.301 "adrfam": "IPv4", 00:17:01.301 "traddr": "10.0.0.1", 00:17:01.301 "trsvcid": "42054" 00:17:01.301 }, 00:17:01.301 "auth": { 00:17:01.301 "state": "completed", 00:17:01.301 "digest": "sha512", 00:17:01.301 "dhgroup": "null" 00:17:01.301 } 00:17:01.301 } 00:17:01.301 ]' 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.301 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.561 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:01.561 07:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:02.129 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.129 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.129 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.129 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.129 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.129 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.129 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.129 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.388 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.389 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.648 00:17:02.648 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.648 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.648 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.648 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.648 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.648 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.648 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.906 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.906 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.906 { 00:17:02.906 "cntlid": 103, 00:17:02.906 "qid": 0, 00:17:02.906 "state": "enabled", 00:17:02.906 "thread": "nvmf_tgt_poll_group_000", 00:17:02.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.906 "listen_address": { 00:17:02.906 "trtype": "TCP", 00:17:02.906 "adrfam": "IPv4", 00:17:02.906 "traddr": "10.0.0.2", 00:17:02.906 "trsvcid": "4420" 00:17:02.906 }, 00:17:02.906 "peer_address": { 00:17:02.906 "trtype": "TCP", 00:17:02.906 "adrfam": "IPv4", 00:17:02.906 "traddr": "10.0.0.1", 00:17:02.906 "trsvcid": "42076" 00:17:02.906 }, 00:17:02.906 "auth": { 00:17:02.906 "state": "completed", 00:17:02.906 "digest": "sha512", 00:17:02.906 "dhgroup": "null" 00:17:02.906 } 00:17:02.906 } 00:17:02.906 ]' 00:17:02.906 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.906 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.906 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.906 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.906 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.906 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.906 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.906 07:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.165 07:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:03.165 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:03.733 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.733 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.733 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.733 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.733 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.733 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.733 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.733 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.733 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.992 07:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.251 00:17:04.251 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.251 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.251 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.251 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.251 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.251 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.251 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.251 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.251 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.251 { 00:17:04.251 "cntlid": 105, 00:17:04.251 "qid": 0, 00:17:04.251 "state": "enabled", 00:17:04.251 "thread": "nvmf_tgt_poll_group_000", 00:17:04.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.251 "listen_address": { 00:17:04.251 "trtype": "TCP", 00:17:04.251 "adrfam": "IPv4", 00:17:04.251 "traddr": "10.0.0.2", 00:17:04.251 "trsvcid": "4420" 00:17:04.251 }, 00:17:04.251 "peer_address": { 00:17:04.251 "trtype": "TCP", 00:17:04.251 "adrfam": "IPv4", 00:17:04.251 "traddr": "10.0.0.1", 00:17:04.251 "trsvcid": "42106" 00:17:04.251 }, 00:17:04.252 "auth": { 00:17:04.252 "state": "completed", 00:17:04.252 "digest": "sha512", 00:17:04.252 "dhgroup": "ffdhe2048" 00:17:04.252 } 00:17:04.252 } 00:17:04.252 ]' 00:17:04.252 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.510 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.510 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.510 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.510 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.510 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.510 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.510 07:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.770 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:04.770 07:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:05.338 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.338 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.338 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.338 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.338 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.338 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.338 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.338 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.338 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.339 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.597 00:17:05.597 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.597 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.597 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.855 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.855 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.855 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.855 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.855 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.855 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.855 { 00:17:05.855 "cntlid": 107, 00:17:05.855 "qid": 0, 00:17:05.855 "state": "enabled", 00:17:05.855 "thread": "nvmf_tgt_poll_group_000", 00:17:05.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.855 "listen_address": { 00:17:05.856 "trtype": "TCP", 00:17:05.856 "adrfam": "IPv4", 00:17:05.856 "traddr": "10.0.0.2", 00:17:05.856 "trsvcid": "4420" 00:17:05.856 }, 00:17:05.856 "peer_address": { 00:17:05.856 "trtype": "TCP", 00:17:05.856 "adrfam": "IPv4", 00:17:05.856 "traddr": "10.0.0.1", 00:17:05.856 "trsvcid": "42128" 00:17:05.856 }, 00:17:05.856 "auth": { 00:17:05.856 "state": "completed", 00:17:05.856 "digest": "sha512", 00:17:05.856 "dhgroup": "ffdhe2048" 00:17:05.856 } 00:17:05.856 } 00:17:05.856 ]' 00:17:05.856 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.856 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.856 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.115 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.115 07:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:06.115 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.115 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.115 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.115 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:06.115 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:07.051 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.051 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.051 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.051 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.051 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.051 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.051 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.051 08:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.051 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:07.051 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.051 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.051 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.051 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.051 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.051 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
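The trace above is one pass of the test's per-key cycle for sha512/ffdhe2048: the host bdev module is restricted to a single digest and DH group, the key pair is registered against the subsystem, a controller is attached through the host RPC socket, and the resulting qpair is checked before everything is torn down again. A minimal sketch of that cycle follows, using the same RPC calls seen in the trace; the relative scripts/rpc.py path, the default target-side RPC socket, and the key names key1/ckey1 (keyring keys registered earlier in the run, not shown in this excerpt) are assumptions for illustration.

    # host side: restrict DH-HMAC-CHAP negotiation to one digest/DH group combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # target side (default RPC socket assumed): allow the host NQN with a key pair
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach a controller, authenticating with the same keys
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

The teardown half of the cycle, also visible in the trace, is bdev_nvme_detach_controller nvme0 on the host socket followed by nvmf_subsystem_remove_host on the target.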
00:17:07.052 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.052 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.052 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.052 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.052 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.052 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.310 00:17:07.310 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.310 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.310 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.569 { 00:17:07.569 "cntlid": 109, 00:17:07.569 "qid": 0, 00:17:07.569 "state": "enabled", 00:17:07.569 "thread": "nvmf_tgt_poll_group_000", 00:17:07.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.569 "listen_address": { 00:17:07.569 "trtype": "TCP", 00:17:07.569 "adrfam": "IPv4", 00:17:07.569 "traddr": "10.0.0.2", 00:17:07.569 "trsvcid": "4420" 00:17:07.569 }, 00:17:07.569 "peer_address": { 00:17:07.569 "trtype": "TCP", 00:17:07.569 "adrfam": "IPv4", 00:17:07.569 "traddr": "10.0.0.1", 00:17:07.569 "trsvcid": "42164" 00:17:07.569 }, 00:17:07.569 "auth": { 00:17:07.569 "state": "completed", 00:17:07.569 "digest": "sha512", 00:17:07.569 "dhgroup": "ffdhe2048" 00:17:07.569 } 00:17:07.569 } 00:17:07.569 ]' 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.569 08:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.569 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.828 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:07.828 08:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:08.397 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.397 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.397 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.397 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.397 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.397 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.397 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.397 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.656 08:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.656 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.914 00:17:08.914 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.914 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.915 08:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.174 { 00:17:09.174 "cntlid": 111, 00:17:09.174 "qid": 0, 00:17:09.174 "state": "enabled", 00:17:09.174 "thread": "nvmf_tgt_poll_group_000", 00:17:09.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.174 "listen_address": { 00:17:09.174 "trtype": "TCP", 00:17:09.174 "adrfam": "IPv4", 00:17:09.174 "traddr": "10.0.0.2", 00:17:09.174 "trsvcid": "4420" 00:17:09.174 }, 00:17:09.174 "peer_address": { 00:17:09.174 "trtype": "TCP", 00:17:09.174 "adrfam": "IPv4", 00:17:09.174 "traddr": "10.0.0.1", 00:17:09.174 "trsvcid": "42194" 00:17:09.174 }, 00:17:09.174 "auth": { 00:17:09.174 "state": "completed", 00:17:09.174 "digest": "sha512", 00:17:09.174 "dhgroup": "ffdhe2048" 00:17:09.174 } 00:17:09.174 } 00:17:09.174 ]' 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.174 
08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.174 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.175 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.175 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.433 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:09.433 08:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:10.001 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.001 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.001 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.001 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.001 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.001 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.001 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.001 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:10.001 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.259 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.518 00:17:10.518 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.518 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.518 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.778 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.778 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.779 { 00:17:10.779 "cntlid": 113, 00:17:10.779 "qid": 0, 00:17:10.779 "state": "enabled", 00:17:10.779 "thread": "nvmf_tgt_poll_group_000", 00:17:10.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.779 "listen_address": { 00:17:10.779 "trtype": "TCP", 00:17:10.779 "adrfam": "IPv4", 00:17:10.779 "traddr": "10.0.0.2", 00:17:10.779 "trsvcid": "4420" 00:17:10.779 }, 00:17:10.779 "peer_address": { 00:17:10.779 "trtype": "TCP", 00:17:10.779 "adrfam": "IPv4", 00:17:10.779 "traddr": "10.0.0.1", 00:17:10.779 "trsvcid": "34482" 00:17:10.779 }, 00:17:10.779 "auth": { 00:17:10.779 "state": "completed", 00:17:10.779 "digest": "sha512", 00:17:10.779 "dhgroup": "ffdhe3072" 00:17:10.779 } 00:17:10.779 } 00:17:10.779 ]' 00:17:10.779 08:00:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.779 08:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.038 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:11.038 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:11.607 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.607 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.607 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.607 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.607 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.607 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.607 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.607 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.866 08:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.125 00:17:12.125 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.125 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.125 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.460 { 00:17:12.460 "cntlid": 115, 00:17:12.460 "qid": 0, 00:17:12.460 "state": "enabled", 00:17:12.460 "thread": "nvmf_tgt_poll_group_000", 00:17:12.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.460 "listen_address": { 00:17:12.460 "trtype": "TCP", 00:17:12.460 "adrfam": "IPv4", 00:17:12.460 "traddr": "10.0.0.2", 00:17:12.460 "trsvcid": "4420" 00:17:12.460 }, 00:17:12.460 "peer_address": { 00:17:12.460 "trtype": "TCP", 00:17:12.460 "adrfam": "IPv4", 
00:17:12.460 "traddr": "10.0.0.1", 00:17:12.460 "trsvcid": "34504" 00:17:12.460 }, 00:17:12.460 "auth": { 00:17:12.460 "state": "completed", 00:17:12.460 "digest": "sha512", 00:17:12.460 "dhgroup": "ffdhe3072" 00:17:12.460 } 00:17:12.460 } 00:17:12.460 ]' 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.460 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.747 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:12.747 08:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.314 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.573 00:17:13.573 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.573 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.573 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.832 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.832 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.832 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.832 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.832 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.832 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.832 { 00:17:13.832 "cntlid": 117, 00:17:13.832 "qid": 0, 00:17:13.832 "state": "enabled", 00:17:13.832 "thread": "nvmf_tgt_poll_group_000", 00:17:13.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:13.832 "listen_address": { 00:17:13.832 "trtype": "TCP", 
00:17:13.832 "adrfam": "IPv4", 00:17:13.832 "traddr": "10.0.0.2", 00:17:13.832 "trsvcid": "4420" 00:17:13.832 }, 00:17:13.832 "peer_address": { 00:17:13.832 "trtype": "TCP", 00:17:13.832 "adrfam": "IPv4", 00:17:13.832 "traddr": "10.0.0.1", 00:17:13.832 "trsvcid": "34532" 00:17:13.832 }, 00:17:13.832 "auth": { 00:17:13.832 "state": "completed", 00:17:13.832 "digest": "sha512", 00:17:13.832 "dhgroup": "ffdhe3072" 00:17:13.832 } 00:17:13.832 } 00:17:13.832 ]' 00:17:13.832 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.832 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.832 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.090 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.090 08:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.090 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.090 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.090 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.350 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:14.350 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.917 08:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.176 00:17:15.176 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.176 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.176 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.435 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.435 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.435 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.435 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.435 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.435 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.435 { 00:17:15.435 "cntlid": 119, 00:17:15.435 "qid": 0, 00:17:15.435 "state": "enabled", 00:17:15.435 "thread": "nvmf_tgt_poll_group_000", 00:17:15.435 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:15.435 "listen_address": { 00:17:15.435 "trtype": "TCP", 00:17:15.435 "adrfam": "IPv4", 00:17:15.435 "traddr": "10.0.0.2", 00:17:15.435 "trsvcid": "4420" 00:17:15.435 }, 00:17:15.435 "peer_address": { 00:17:15.435 "trtype": "TCP", 00:17:15.435 "adrfam": "IPv4", 00:17:15.435 "traddr": "10.0.0.1", 00:17:15.435 "trsvcid": "34554" 00:17:15.435 }, 00:17:15.435 "auth": { 00:17:15.435 "state": "completed", 00:17:15.435 "digest": "sha512", 00:17:15.435 "dhgroup": "ffdhe3072" 00:17:15.435 } 00:17:15.435 } 00:17:15.435 ]' 00:17:15.435 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.435 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.435 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.694 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.694 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.694 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.694 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.694 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.694 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:15.694 08:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:16.261 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.520 08:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.520 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.779 00:17:16.779 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.779 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.779 08:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.038 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.038 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.038 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.038 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.038 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.038 08:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.038 { 00:17:17.038 "cntlid": 121, 00:17:17.038 "qid": 0, 00:17:17.038 "state": "enabled", 00:17:17.038 "thread": "nvmf_tgt_poll_group_000", 00:17:17.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.038 "listen_address": { 00:17:17.038 "trtype": "TCP", 00:17:17.038 "adrfam": "IPv4", 00:17:17.038 "traddr": "10.0.0.2", 00:17:17.038 "trsvcid": "4420" 00:17:17.038 }, 00:17:17.038 "peer_address": { 00:17:17.038 "trtype": "TCP", 00:17:17.038 "adrfam": "IPv4", 00:17:17.038 "traddr": "10.0.0.1", 00:17:17.038 "trsvcid": "34584" 00:17:17.038 }, 00:17:17.038 "auth": { 00:17:17.038 "state": "completed", 00:17:17.038 "digest": "sha512", 00:17:17.038 "dhgroup": "ffdhe4096" 00:17:17.038 } 00:17:17.038 } 00:17:17.038 ]' 00:17:17.038 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.038 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.038 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.298 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:17.298 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.298 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.298 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.298 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.298 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:17.298 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:17.866 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.866 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.866 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.866 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.866 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
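Besides the SPDK host stack, each key is also exercised through the kernel initiator: nvme-cli connects in-band with the DHHC-1 formatted secrets generated earlier in the run and then disconnects, which is what produces the "disconnected 1 controller(s)" lines. A sketch of that step with the secret values elided (the -i 1 and -l 0 options simply mirror the traced invocation, and the default NVMe/TCP port is assumed):

    # in-band DH-HMAC-CHAP with the kernel initiator; real secrets elided here
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:00:<elided>:' --dhchap-ctrl-secret 'DHHC-1:03:<elided>:'

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The secrets passed on the command line are the same DHHC-1 strings that appear throughout the trace; only their placement in the nvme-cli options differs from the RPC-based path.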
00:17:17.866 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.866 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.866 08:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.127 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.387 00:17:18.387 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.387 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.387 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.646 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.647 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.647 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.647 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.647 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.647 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.647 { 00:17:18.647 "cntlid": 123, 00:17:18.647 "qid": 0, 00:17:18.647 "state": "enabled", 00:17:18.647 "thread": "nvmf_tgt_poll_group_000", 00:17:18.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:18.647 "listen_address": { 00:17:18.647 "trtype": "TCP", 00:17:18.647 "adrfam": "IPv4", 00:17:18.647 "traddr": "10.0.0.2", 00:17:18.647 "trsvcid": "4420" 00:17:18.647 }, 00:17:18.647 "peer_address": { 00:17:18.647 "trtype": "TCP", 00:17:18.647 "adrfam": "IPv4", 00:17:18.647 "traddr": "10.0.0.1", 00:17:18.647 "trsvcid": "34596" 00:17:18.647 }, 00:17:18.647 "auth": { 00:17:18.647 "state": "completed", 00:17:18.647 "digest": "sha512", 00:17:18.647 "dhgroup": "ffdhe4096" 00:17:18.647 } 00:17:18.647 } 00:17:18.647 ]' 00:17:18.647 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.647 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.647 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.647 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.647 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.906 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.906 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.906 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.906 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:18.906 08:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:19.473 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.473 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.473 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.473 08:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.473 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.473 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.473 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.473 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.731 08:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.989 00:17:19.989 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.989 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.989 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.248 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.248 08:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.248 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.248 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.248 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.248 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.248 { 00:17:20.248 "cntlid": 125, 00:17:20.248 "qid": 0, 00:17:20.248 "state": "enabled", 00:17:20.248 "thread": "nvmf_tgt_poll_group_000", 00:17:20.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:20.248 "listen_address": { 00:17:20.248 "trtype": "TCP", 00:17:20.248 "adrfam": "IPv4", 00:17:20.248 "traddr": "10.0.0.2", 00:17:20.248 "trsvcid": "4420" 00:17:20.248 }, 00:17:20.248 "peer_address": { 00:17:20.248 "trtype": "TCP", 00:17:20.248 "adrfam": "IPv4", 00:17:20.248 "traddr": "10.0.0.1", 00:17:20.248 "trsvcid": "39824" 00:17:20.248 }, 00:17:20.248 "auth": { 00:17:20.248 "state": "completed", 00:17:20.248 "digest": "sha512", 00:17:20.248 "dhgroup": "ffdhe4096" 00:17:20.248 } 00:17:20.248 } 00:17:20.248 ]' 00:17:20.248 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.248 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.248 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.248 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.248 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.507 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.507 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.507 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.507 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:20.507 08:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:21.105 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.105 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.105 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.105 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.105 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.105 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.105 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.105 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.364 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.623 00:17:21.623 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.623 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.623 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.882 08:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.882 { 00:17:21.882 "cntlid": 127, 00:17:21.882 "qid": 0, 00:17:21.882 "state": "enabled", 00:17:21.882 "thread": "nvmf_tgt_poll_group_000", 00:17:21.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:21.882 "listen_address": { 00:17:21.882 "trtype": "TCP", 00:17:21.882 "adrfam": "IPv4", 00:17:21.882 "traddr": "10.0.0.2", 00:17:21.882 "trsvcid": "4420" 00:17:21.882 }, 00:17:21.882 "peer_address": { 00:17:21.882 "trtype": "TCP", 00:17:21.882 "adrfam": "IPv4", 00:17:21.882 "traddr": "10.0.0.1", 00:17:21.882 "trsvcid": "39860" 00:17:21.882 }, 00:17:21.882 "auth": { 00:17:21.882 "state": "completed", 00:17:21.882 "digest": "sha512", 00:17:21.882 "dhgroup": "ffdhe4096" 00:17:21.882 } 00:17:21.882 } 00:17:21.882 ]' 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.882 08:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.141 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:22.141 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:22.710 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.710 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.710 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.710 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.710 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.710 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.710 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.710 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.710 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.969 08:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.228 00:17:23.228 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.228 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.228 
08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.487 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.488 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.488 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.488 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.488 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.488 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.488 { 00:17:23.488 "cntlid": 129, 00:17:23.488 "qid": 0, 00:17:23.488 "state": "enabled", 00:17:23.488 "thread": "nvmf_tgt_poll_group_000", 00:17:23.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:23.488 "listen_address": { 00:17:23.488 "trtype": "TCP", 00:17:23.488 "adrfam": "IPv4", 00:17:23.488 "traddr": "10.0.0.2", 00:17:23.488 "trsvcid": "4420" 00:17:23.488 }, 00:17:23.488 "peer_address": { 00:17:23.488 "trtype": "TCP", 00:17:23.488 "adrfam": "IPv4", 00:17:23.488 "traddr": "10.0.0.1", 00:17:23.488 "trsvcid": "39880" 00:17:23.488 }, 00:17:23.488 "auth": { 00:17:23.488 "state": "completed", 00:17:23.488 "digest": "sha512", 00:17:23.488 "dhgroup": "ffdhe6144" 00:17:23.488 } 00:17:23.488 } 00:17:23.488 ]' 00:17:23.488 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.488 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.488 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.488 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.488 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.747 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.747 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.747 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.747 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:23.747 08:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret 
DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:24.314 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.314 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.314 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.314 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.314 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.314 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.314 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:24.314 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.573 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.142 00:17:25.142 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.142 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.142 08:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.142 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.142 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.142 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.142 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.142 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.142 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.142 { 00:17:25.142 "cntlid": 131, 00:17:25.142 "qid": 0, 00:17:25.142 "state": "enabled", 00:17:25.142 "thread": "nvmf_tgt_poll_group_000", 00:17:25.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:25.142 "listen_address": { 00:17:25.142 "trtype": "TCP", 00:17:25.142 "adrfam": "IPv4", 00:17:25.142 "traddr": "10.0.0.2", 00:17:25.142 "trsvcid": "4420" 00:17:25.142 }, 00:17:25.142 "peer_address": { 00:17:25.142 "trtype": "TCP", 00:17:25.142 "adrfam": "IPv4", 00:17:25.142 "traddr": "10.0.0.1", 00:17:25.142 "trsvcid": "39904" 00:17:25.142 }, 00:17:25.142 "auth": { 00:17:25.142 "state": "completed", 00:17:25.142 "digest": "sha512", 00:17:25.142 "dhgroup": "ffdhe6144" 00:17:25.142 } 00:17:25.142 } 00:17:25.142 ]' 00:17:25.142 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.142 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.142 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.401 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.401 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.401 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.401 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.401 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.401 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:25.401 08:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:25.969 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.229 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.797 00:17:26.797 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.797 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.797 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.797 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.797 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.797 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.797 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.797 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.797 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.797 { 00:17:26.797 "cntlid": 133, 00:17:26.797 "qid": 0, 00:17:26.797 "state": "enabled", 00:17:26.797 "thread": "nvmf_tgt_poll_group_000", 00:17:26.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:26.797 "listen_address": { 00:17:26.797 "trtype": "TCP", 00:17:26.797 "adrfam": "IPv4", 00:17:26.797 "traddr": "10.0.0.2", 00:17:26.797 "trsvcid": "4420" 00:17:26.797 }, 00:17:26.797 "peer_address": { 00:17:26.797 "trtype": "TCP", 00:17:26.797 "adrfam": "IPv4", 00:17:26.797 "traddr": "10.0.0.1", 00:17:26.797 "trsvcid": "39918" 00:17:26.797 }, 00:17:26.797 "auth": { 00:17:26.797 "state": "completed", 00:17:26.797 "digest": "sha512", 00:17:26.797 "dhgroup": "ffdhe6144" 00:17:26.797 } 00:17:26.797 } 00:17:26.797 ]' 00:17:26.797 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.056 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.056 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.056 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.056 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.056 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.056 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.056 08:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.314 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret 
DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:27.315 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.881 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.882 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.882 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:27.882 08:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.449 00:17:28.449 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.449 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.449 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.449 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.449 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.449 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.449 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.449 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.449 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.449 { 00:17:28.449 "cntlid": 135, 00:17:28.449 "qid": 0, 00:17:28.449 "state": "enabled", 00:17:28.449 "thread": "nvmf_tgt_poll_group_000", 00:17:28.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:28.449 "listen_address": { 00:17:28.449 "trtype": "TCP", 00:17:28.449 "adrfam": "IPv4", 00:17:28.449 "traddr": "10.0.0.2", 00:17:28.449 "trsvcid": "4420" 00:17:28.449 }, 00:17:28.449 "peer_address": { 00:17:28.449 "trtype": "TCP", 00:17:28.449 "adrfam": "IPv4", 00:17:28.449 "traddr": "10.0.0.1", 00:17:28.449 "trsvcid": "39946" 00:17:28.449 }, 00:17:28.449 "auth": { 00:17:28.449 "state": "completed", 00:17:28.449 "digest": "sha512", 00:17:28.449 "dhgroup": "ffdhe6144" 00:17:28.449 } 00:17:28.449 } 00:17:28.449 ]' 00:17:28.449 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.707 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.707 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.707 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.707 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.707 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.707 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.707 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.965 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:28.965 08:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.533 08:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.102 00:17:30.102 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.102 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.102 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.360 { 00:17:30.360 "cntlid": 137, 00:17:30.360 "qid": 0, 00:17:30.360 "state": "enabled", 00:17:30.360 "thread": "nvmf_tgt_poll_group_000", 00:17:30.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.360 "listen_address": { 00:17:30.360 "trtype": "TCP", 00:17:30.360 "adrfam": "IPv4", 00:17:30.360 "traddr": "10.0.0.2", 00:17:30.360 "trsvcid": "4420" 00:17:30.360 }, 00:17:30.360 "peer_address": { 00:17:30.360 "trtype": "TCP", 00:17:30.360 "adrfam": "IPv4", 00:17:30.360 "traddr": "10.0.0.1", 00:17:30.360 "trsvcid": "54000" 00:17:30.360 }, 00:17:30.360 "auth": { 00:17:30.360 "state": "completed", 00:17:30.360 "digest": "sha512", 00:17:30.360 "dhgroup": "ffdhe8192" 00:17:30.360 } 00:17:30.360 } 00:17:30.360 ]' 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.360 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.619 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:30.619 08:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:31.187 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.187 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.187 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.187 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.187 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.187 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.187 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.187 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.446 08:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.446 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.013 00:17:32.013 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.013 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.013 08:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.013 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.013 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.013 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.013 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.013 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.013 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.013 { 00:17:32.013 "cntlid": 139, 00:17:32.013 "qid": 0, 00:17:32.013 "state": "enabled", 00:17:32.013 "thread": "nvmf_tgt_poll_group_000", 00:17:32.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:32.013 "listen_address": { 00:17:32.013 "trtype": "TCP", 00:17:32.013 "adrfam": "IPv4", 00:17:32.013 "traddr": "10.0.0.2", 00:17:32.013 "trsvcid": "4420" 00:17:32.013 }, 00:17:32.013 "peer_address": { 00:17:32.013 "trtype": "TCP", 00:17:32.013 "adrfam": "IPv4", 00:17:32.013 "traddr": "10.0.0.1", 00:17:32.013 "trsvcid": "54040" 00:17:32.013 }, 00:17:32.013 "auth": { 00:17:32.013 "state": "completed", 00:17:32.013 "digest": "sha512", 00:17:32.013 "dhgroup": "ffdhe8192" 00:17:32.013 } 00:17:32.013 } 00:17:32.013 ]' 00:17:32.013 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.272 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.272 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.272 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.272 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.272 08:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.272 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.272 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.531 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:32.531 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: --dhchap-ctrl-secret DHHC-1:02:YjY5Y2VjNDY4NGJjNWY3YjVkMjg4YzBiZGZhNDJlODU5ODJlZmMxNTFlNTUxNDI3hkvT+Q==: 00:17:33.097 08:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.097 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.098 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.098 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.098 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.098 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.098 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.098 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.356 08:00:27 
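The nvme_connect/nvme disconnect pairs above exercise the kernel initiator with the same secrets. Stripped of the harness wrappers, one of those host-side checks looks roughly like the following; KEY and CKEY are placeholders for the DHHC-1 secrets generated earlier in the run:

    # Kernel-initiator connect/disconnect with DH-CHAP, mirroring the flags used in this run.
    KEY='DHHC-1:01:...'      # host key (placeholder for the generated secret)
    CKEY='DHHC-1:02:...'     # controller (bidirectional) key (placeholder)

    nvme connect -t tcp -a 10.0.0.2 \
        -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
        --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0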
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.356 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.614 00:17:33.873 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.873 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.873 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.874 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.874 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.874 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.874 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.874 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.874 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.874 { 00:17:33.874 "cntlid": 141, 00:17:33.874 "qid": 0, 00:17:33.874 "state": "enabled", 00:17:33.874 "thread": "nvmf_tgt_poll_group_000", 00:17:33.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:33.874 "listen_address": { 00:17:33.874 "trtype": "TCP", 00:17:33.874 "adrfam": "IPv4", 00:17:33.874 "traddr": "10.0.0.2", 00:17:33.874 "trsvcid": "4420" 00:17:33.874 }, 00:17:33.874 "peer_address": { 00:17:33.874 "trtype": "TCP", 00:17:33.874 "adrfam": "IPv4", 00:17:33.874 "traddr": "10.0.0.1", 00:17:33.874 "trsvcid": "54080" 00:17:33.874 }, 00:17:33.874 "auth": { 00:17:33.874 "state": "completed", 00:17:33.874 "digest": "sha512", 00:17:33.874 "dhgroup": "ffdhe8192" 00:17:33.874 } 00:17:33.874 } 00:17:33.874 ]' 00:17:33.874 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.133 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.133 08:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.133 08:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.133 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.133 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.133 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.133 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.391 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:34.391 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:01:YzJlMDM0ODZmODljMjEwZWM4MzIzMmYzYWRiNTY2ZTbKTcKc: 00:17:34.958 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.958 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.958 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.958 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.958 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.958 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.958 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.958 08:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.958 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:34.958 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.958 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.959 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.959 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.959 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.959 08:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:34.959 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.959 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.959 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.959 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.959 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.959 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.526 00:17:35.526 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.526 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.526 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.785 { 00:17:35.785 "cntlid": 143, 00:17:35.785 "qid": 0, 00:17:35.785 "state": "enabled", 00:17:35.785 "thread": "nvmf_tgt_poll_group_000", 00:17:35.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:35.785 "listen_address": { 00:17:35.785 "trtype": "TCP", 00:17:35.785 "adrfam": "IPv4", 00:17:35.785 "traddr": "10.0.0.2", 00:17:35.785 "trsvcid": "4420" 00:17:35.785 }, 00:17:35.785 "peer_address": { 00:17:35.785 "trtype": "TCP", 00:17:35.785 "adrfam": "IPv4", 00:17:35.785 "traddr": "10.0.0.1", 00:17:35.785 "trsvcid": "54112" 00:17:35.785 }, 00:17:35.785 "auth": { 00:17:35.785 "state": "completed", 00:17:35.785 "digest": "sha512", 00:17:35.785 "dhgroup": "ffdhe8192" 00:17:35.785 } 00:17:35.785 } 00:17:35.785 ]' 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.785 
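Each `for keyid in "${!keys[@]}"` pass above follows the same shape: restrict the initiator app to sha512/ffdhe8192, bind the key (and its controller key, when one exists) to the host entry on the target, then attach a controller through the host-side RPC socket. A condensed sketch of one iteration, assuming the two rpc.py sockets used in this run and keyring entries named key1/ckey1 registered earlier:

    rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    host_rpc()   { "$rpcpy" -s /var/tmp/host.sock "$@"; }   # initiator-side SPDK app
    target_rpc() { "$rpcpy" -s /var/tmp/spdk.sock "$@"; }   # nvmf_tgt (assumed default socket)

    # Restrict the initiator to a single digest/dhgroup pair for this pass.
    host_rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Bind the key pair to the host NQN on the target side.
    target_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Attach a controller from the initiator app; DH-CHAP runs during the fabric CONNECT.
    host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1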
08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.785 08:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.044 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:36.044 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.609 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.868 08:00:30 
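The `IFS=,` / `printf %s` pair above is how auth.sh turns its digest and dhgroup arrays into the comma-separated lists that bdev_nvme_set_options expects, so the host advertises every algorithm and the target still settles on the pair tied to the key. A small sketch of that join, with join_csv as an illustrative helper name:

    rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    # Join an array into the comma-separated list the RPC takes (the IFS=,/printf idiom above).
    join_csv() { local IFS=,; printf %s "$*"; }

    "$rpcpy" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests  "$(join_csv "${digests[@]}")" \
        --dhchap-dhgroups "$(join_csv "${dhgroups[@]}")"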
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.868 08:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.438 00:17:37.438 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.438 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.438 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.698 { 00:17:37.698 "cntlid": 145, 00:17:37.698 "qid": 0, 00:17:37.698 "state": "enabled", 00:17:37.698 "thread": "nvmf_tgt_poll_group_000", 00:17:37.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:37.698 "listen_address": { 00:17:37.698 "trtype": "TCP", 00:17:37.698 "adrfam": "IPv4", 00:17:37.698 "traddr": "10.0.0.2", 00:17:37.698 "trsvcid": "4420" 00:17:37.698 }, 00:17:37.698 "peer_address": { 00:17:37.698 
"trtype": "TCP", 00:17:37.698 "adrfam": "IPv4", 00:17:37.698 "traddr": "10.0.0.1", 00:17:37.698 "trsvcid": "54144" 00:17:37.698 }, 00:17:37.698 "auth": { 00:17:37.698 "state": "completed", 00:17:37.698 "digest": "sha512", 00:17:37.698 "dhgroup": "ffdhe8192" 00:17:37.698 } 00:17:37.698 } 00:17:37.698 ]' 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.698 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.958 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:37.958 08:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjdkYTI0ZGRhODc1MjdmNGVmNzJjNzI1NmM1MWE5NGJjOGM4NzdhMDAwMDQ4OWM5Rt+FuA==: --dhchap-ctrl-secret DHHC-1:03:YjJkZDk1ZTI0MGMwMTM0ZTI2MzA0MGYwODFlN2ZiNWNhMzhjM2VmOWU1ZTIyZjg4YWY2OGFkODk4MzE3MTlhNc0mB/I=: 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:38.526 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:39.094 request: 00:17:39.094 { 00:17:39.094 "name": "nvme0", 00:17:39.094 "trtype": "tcp", 00:17:39.094 "traddr": "10.0.0.2", 00:17:39.094 "adrfam": "ipv4", 00:17:39.094 "trsvcid": "4420", 00:17:39.094 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.094 "prchk_reftag": false, 00:17:39.094 "prchk_guard": false, 00:17:39.094 "hdgst": false, 00:17:39.094 "ddgst": false, 00:17:39.094 "dhchap_key": "key2", 00:17:39.094 "allow_unrecognized_csi": false, 00:17:39.094 "method": "bdev_nvme_attach_controller", 00:17:39.094 "req_id": 1 00:17:39.094 } 00:17:39.094 Got JSON-RPC error response 00:17:39.094 response: 00:17:39.094 { 00:17:39.094 "code": -5, 00:17:39.094 "message": "Input/output error" 00:17:39.094 } 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.094 08:00:32 
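The request/response pair above is the expected failure path: key2 was never bound to the host entry, so the DH-CHAP transaction fails and bdev_nvme_attach_controller comes back with JSON-RPC error -5 (Input/output error). auth.sh asserts this through its NOT helper; a plain-bash equivalent of the same assertion, under the same socket assumptions, would be:

    # Expect the attach to FAIL: the target has no DH-CHAP key2 registered for this host,
    # so a successful return here counts as a test failure.
    rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    if "$rpcpy" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2; then
        echo "attach with an unregistered key unexpectedly succeeded" >&2
        exit 1
    fi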
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.094 08:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.353 request: 00:17:39.353 { 00:17:39.353 "name": "nvme0", 00:17:39.353 "trtype": "tcp", 00:17:39.353 "traddr": "10.0.0.2", 00:17:39.353 "adrfam": "ipv4", 00:17:39.353 "trsvcid": "4420", 00:17:39.353 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.353 "prchk_reftag": false, 00:17:39.353 "prchk_guard": false, 00:17:39.353 "hdgst": false, 00:17:39.353 "ddgst": false, 00:17:39.353 "dhchap_key": "key1", 00:17:39.353 "dhchap_ctrlr_key": "ckey2", 00:17:39.353 "allow_unrecognized_csi": false, 00:17:39.353 "method": "bdev_nvme_attach_controller", 00:17:39.353 "req_id": 1 00:17:39.353 } 00:17:39.353 Got JSON-RPC error response 00:17:39.353 response: 00:17:39.353 { 00:17:39.353 "code": -5, 00:17:39.353 "message": "Input/output error" 00:17:39.353 } 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:39.353 08:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:39.353 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.354 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:39.354 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.354 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.354 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.354 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.921 request: 00:17:39.921 { 00:17:39.921 "name": "nvme0", 00:17:39.921 "trtype": "tcp", 00:17:39.921 "traddr": "10.0.0.2", 00:17:39.921 "adrfam": "ipv4", 00:17:39.921 "trsvcid": "4420", 00:17:39.921 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.921 "prchk_reftag": false, 00:17:39.921 "prchk_guard": false, 00:17:39.921 "hdgst": false, 00:17:39.921 "ddgst": false, 00:17:39.921 "dhchap_key": "key1", 00:17:39.921 "dhchap_ctrlr_key": "ckey1", 00:17:39.921 "allow_unrecognized_csi": false, 00:17:39.921 "method": "bdev_nvme_attach_controller", 00:17:39.921 "req_id": 1 00:17:39.921 } 00:17:39.921 Got JSON-RPC error response 00:17:39.921 response: 00:17:39.921 { 00:17:39.921 "code": -5, 00:17:39.921 "message": "Input/output error" 00:17:39.921 } 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2429699 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2429699 ']' 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2429699 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2429699 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2429699' 00:17:39.921 killing process with pid 2429699 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2429699 00:17:39.921 08:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2429699 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2451726 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2451726 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2451726 ']' 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.181 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2451726 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2451726 ']' 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
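After killing the first target (pid 2429699), the harness restarts nvmf_tgt with --wait-for-rpc and the nvmf_auth debug log component, then blocks until the RPC socket answers before loading keys. A rough sketch of that restart, with the netns prefix shown above and a simple poll standing in for the waitforlisten helper (rpc_get_methods is usable even before framework init):

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Start the target with RPC-gated init and DH-CHAP auth logging, inside the test netns.
    ip netns exec cvl_0_0_ns_spdk "$tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Poll the RPC socket until the app answers (waitforlisten does this with more care).
    until "$rpcpy" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done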
00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.440 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.699 null0 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.k8O 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.bGF ]] 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bGF 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7kQ 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.xAR ]] 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xAR 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.699 08:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GDc 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.oxD ]] 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oxD 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.699 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rgr 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
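The keyring_file_add_key calls above load the secrets generated earlier in the run into the restarted target, under the key0..key3 / ckey0..ckey2 names that the later nvmf_subsystem_add_host calls reference (there is no ckey3, matching the empty check in the log). Condensed, and assuming the default /var/tmp/spdk.sock target socket, the registration step looks like:

    rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Register each on-disk secret under the name the DH-CHAP tests refer to.
    "$rpcpy" -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.k8O
    "$rpcpy" -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bGF
    "$rpcpy" -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-sha256.7kQ
    "$rpcpy" -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xAR
    "$rpcpy" -s /var/tmp/spdk.sock keyring_file_add_key key2  /tmp/spdk.key-sha384.GDc
    "$rpcpy" -s /var/tmp/spdk.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oxD
    "$rpcpy" -s /var/tmp/spdk.sock keyring_file_add_key key3  /tmp/spdk.key-sha512.rgr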
00:17:40.700 08:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.635 nvme0n1 00:17:41.636 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.636 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.636 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.636 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.636 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.636 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.636 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.636 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.636 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.636 { 00:17:41.636 "cntlid": 1, 00:17:41.636 "qid": 0, 00:17:41.636 "state": "enabled", 00:17:41.636 "thread": "nvmf_tgt_poll_group_000", 00:17:41.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.636 "listen_address": { 00:17:41.636 "trtype": "TCP", 00:17:41.636 "adrfam": "IPv4", 00:17:41.636 "traddr": "10.0.0.2", 00:17:41.636 "trsvcid": "4420" 00:17:41.636 }, 00:17:41.636 "peer_address": { 00:17:41.636 "trtype": "TCP", 00:17:41.636 "adrfam": "IPv4", 00:17:41.636 "traddr": "10.0.0.1", 00:17:41.636 "trsvcid": "46808" 00:17:41.636 }, 00:17:41.636 "auth": { 00:17:41.636 "state": "completed", 00:17:41.636 "digest": "sha512", 00:17:41.636 "dhgroup": "ffdhe8192" 00:17:41.636 } 00:17:41.636 } 00:17:41.636 ]' 00:17:41.636 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.895 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.895 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.895 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.895 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.895 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.895 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.895 08:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.154 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:42.154 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:42.724 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.724 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.724 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.724 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.725 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.725 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:42.725 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.725 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.725 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.725 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:42.725 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:42.984 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:42.984 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:42.984 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:42.984 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:42.984 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.984 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:42.984 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.984 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.984 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.984 08:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.984 request: 00:17:42.984 { 00:17:42.984 "name": "nvme0", 00:17:42.984 "trtype": "tcp", 00:17:42.984 "traddr": "10.0.0.2", 00:17:42.984 "adrfam": "ipv4", 00:17:42.984 "trsvcid": "4420", 00:17:42.984 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:42.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.984 "prchk_reftag": false, 00:17:42.984 "prchk_guard": false, 00:17:42.984 "hdgst": false, 00:17:42.984 "ddgst": false, 00:17:42.984 "dhchap_key": "key3", 00:17:42.984 "allow_unrecognized_csi": false, 00:17:42.984 "method": "bdev_nvme_attach_controller", 00:17:42.984 "req_id": 1 00:17:42.984 } 00:17:42.984 Got JSON-RPC error response 00:17:42.984 response: 00:17:42.984 { 00:17:42.984 "code": -5, 00:17:42.984 "message": "Input/output error" 00:17:42.984 } 00:17:42.984 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:42.984 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:42.984 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:42.984 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:42.984 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:42.984 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:42.984 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:42.984 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:43.243 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:43.243 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:43.243 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:43.243 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:43.243 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.243 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:43.243 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.243 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.243 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.243 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.502 request: 00:17:43.502 { 00:17:43.502 "name": "nvme0", 00:17:43.502 "trtype": "tcp", 00:17:43.502 "traddr": "10.0.0.2", 00:17:43.502 "adrfam": "ipv4", 00:17:43.502 "trsvcid": "4420", 00:17:43.502 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:43.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:43.502 "prchk_reftag": false, 00:17:43.502 "prchk_guard": false, 00:17:43.502 "hdgst": false, 00:17:43.502 "ddgst": false, 00:17:43.502 "dhchap_key": "key3", 00:17:43.502 "allow_unrecognized_csi": false, 00:17:43.502 "method": "bdev_nvme_attach_controller", 00:17:43.502 "req_id": 1 00:17:43.502 } 00:17:43.502 Got JSON-RPC error response 00:17:43.502 response: 00:17:43.502 { 00:17:43.502 "code": -5, 00:17:43.502 "message": "Input/output error" 00:17:43.502 } 00:17:43.502 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:43.502 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.502 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.502 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.502 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:43.502 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:43.502 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:43.502 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.502 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.502 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.762 08:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:44.022 request: 00:17:44.022 { 00:17:44.022 "name": "nvme0", 00:17:44.022 "trtype": "tcp", 00:17:44.022 "traddr": "10.0.0.2", 00:17:44.022 "adrfam": "ipv4", 00:17:44.022 "trsvcid": "4420", 00:17:44.022 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:44.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.022 "prchk_reftag": false, 00:17:44.022 "prchk_guard": false, 00:17:44.022 "hdgst": false, 00:17:44.022 "ddgst": false, 00:17:44.022 "dhchap_key": "key0", 00:17:44.022 "dhchap_ctrlr_key": "key1", 00:17:44.022 "allow_unrecognized_csi": false, 00:17:44.022 "method": "bdev_nvme_attach_controller", 00:17:44.022 "req_id": 1 00:17:44.022 } 00:17:44.022 Got JSON-RPC error response 00:17:44.022 response: 00:17:44.022 { 00:17:44.022 "code": -5, 00:17:44.022 "message": "Input/output error" 00:17:44.022 } 00:17:44.022 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:44.022 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.022 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.022 08:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.022 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:44.022 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:44.022 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:44.281 nvme0n1 00:17:44.281 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:44.281 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:44.281 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.540 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.540 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.540 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.799 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:44.799 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.799 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.799 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.799 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:44.799 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:44.799 08:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:45.368 nvme0n1 00:17:45.368 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:45.368 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:45.368 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.626 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.626 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:45.626 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.626 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.626 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.626 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:45.626 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:45.626 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.883 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.883 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:45.883 08:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: --dhchap-ctrl-secret DHHC-1:03:M2ZhNDg2YWJlOTg5NjVmMzU5ZDA0YTlkYTQ5MGYyMTA3NWMyZWYxYTg4MDU2ZTk3NWI5NDNiYTZiZTQwMTliMRlldZQ=: 00:17:46.450 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:46.450 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:46.450 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:46.450 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:46.450 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:46.450 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:46.450 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:46.450 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.450 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.708 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:17:46.708 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:46.708 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:46.708 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:46.708 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.708 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:46.708 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.708 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:46.708 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:46.708 08:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:46.965 request: 00:17:46.965 { 00:17:46.965 "name": "nvme0", 00:17:46.965 "trtype": "tcp", 00:17:46.965 "traddr": "10.0.0.2", 00:17:46.965 "adrfam": "ipv4", 00:17:46.965 "trsvcid": "4420", 00:17:46.965 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:46.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:46.965 "prchk_reftag": false, 00:17:46.965 "prchk_guard": false, 00:17:46.965 "hdgst": false, 00:17:46.965 "ddgst": false, 00:17:46.965 "dhchap_key": "key1", 00:17:46.965 "allow_unrecognized_csi": false, 00:17:46.965 "method": "bdev_nvme_attach_controller", 00:17:46.965 "req_id": 1 00:17:46.965 } 00:17:46.965 Got JSON-RPC error response 00:17:46.965 response: 00:17:46.965 { 00:17:46.965 "code": -5, 00:17:46.965 "message": "Input/output error" 00:17:46.965 } 00:17:46.965 08:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:46.965 08:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:46.965 08:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:46.965 08:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:46.965 08:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:46.965 08:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:46.965 08:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:47.899 nvme0n1 00:17:47.899 08:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:47.899 08:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:47.899 08:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.158 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.158 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.158 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.158 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.158 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.158 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.158 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.158 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:48.158 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:48.158 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:48.416 nvme0n1 00:17:48.416 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:48.416 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:48.416 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.674 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.674 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.674 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: '' 2s 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: ]] 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:M2IwZTdjNmQ4ZTFjYjc3MmI5NmMzOTgxZmYwNDA1Yjh5mRUY: 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:48.932 08:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: 2s 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: ]] 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MGY5MzU5NmQzNzIxZTIxYTU5ZjYzN2FlZTZhN2I1YWI1ZGYxZDBmMjM0MzM5OWY4rtH5cQ==: 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:50.836 08:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:52.883 08:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:52.883 08:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:52.883 08:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:52.883 08:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:52.883 08:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:52.883 08:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:52.883 08:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:52.883 08:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.142 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.142 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.142 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.142 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.142 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:53.142 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:53.142 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:53.708 nvme0n1 00:17:53.708 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.708 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.708 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.708 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.708 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.708 08:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.274 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:54.274 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:54.274 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.533 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.533 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.533 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.533 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.533 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.533 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:54.533 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.792 08:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:55.359 request: 00:17:55.359 { 00:17:55.359 "name": "nvme0", 00:17:55.359 "dhchap_key": "key1", 00:17:55.359 "dhchap_ctrlr_key": "key3", 00:17:55.359 "method": "bdev_nvme_set_keys", 00:17:55.359 "req_id": 1 00:17:55.359 } 00:17:55.359 Got JSON-RPC error response 00:17:55.359 response: 00:17:55.359 { 00:17:55.359 "code": -13, 00:17:55.359 "message": "Permission denied" 00:17:55.359 } 00:17:55.359 08:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:55.359 08:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.359 08:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.359 08:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.359 08:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:55.359 08:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:55.359 08:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.617 08:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:55.617 08:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:56.551 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:56.551 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:56.551 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.810 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:56.810 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:56.810 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.810 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.810 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.810 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:56.810 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:56.810 08:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.378 nvme0n1 00:17:57.378 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:57.378 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.378 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.637 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.637 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.637 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:57.637 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.637 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
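The helpers being traced at this point, NOT and valid_exec_arg from common/autotest_common.sh, wrap a command that is expected to fail and turn that failure into a test pass; the -13 "Permission denied" response a few lines below is therefore the intended outcome of pushing a mismatched key pair. A condensed sketch of that pattern, simplified from what the xtrace shows (the real helper also special-cases individual signals and an EXIT_STATUS override, which are omitted here), looks roughly like this:

valid_exec_arg() {
    # Only accept functions, shell builtins, or executables on PATH,
    # so a typo in the command name fails the test instead of "passing" it.
    local arg=$1
    case "$(type -t "$arg")" in
        function | builtin | file) return 0 ;;
        *) return 1 ;;
    esac
}

NOT() {
    # Run a command that is EXPECTED to fail; succeed only if it really failed.
    local es=0
    valid_exec_arg "$@" || return 1
    "$@" || es=$?
    ((es > 128)) && return "$es"   # command died on a signal: report it, do not invert
    ((!es == 0))                   # invert: a non-zero exit status counts as a pass
}

In this test it is used as, for example, NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0, which passes precisely because the RPC is rejected.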
00:17:57.637 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.637 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:57.637 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.637 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.637 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.896 request: 00:17:57.896 { 00:17:57.896 "name": "nvme0", 00:17:57.896 "dhchap_key": "key2", 00:17:57.896 "dhchap_ctrlr_key": "key0", 00:17:57.896 "method": "bdev_nvme_set_keys", 00:17:57.896 "req_id": 1 00:17:57.896 } 00:17:57.896 Got JSON-RPC error response 00:17:57.896 response: 00:17:57.896 { 00:17:57.896 "code": -13, 00:17:57.896 "message": "Permission denied" 00:17:57.896 } 00:17:57.896 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:57.896 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.896 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.896 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.896 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:57.896 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:57.896 08:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.155 08:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:58.155 08:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:59.090 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:59.090 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:59.090 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2429729 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2429729 ']' 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2429729 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:59.349 
08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2429729 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2429729' 00:17:59.349 killing process with pid 2429729 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2429729 00:17:59.349 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2429729 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.916 rmmod nvme_tcp 00:17:59.916 rmmod nvme_fabrics 00:17:59.916 rmmod nvme_keyring 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2451726 ']' 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2451726 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2451726 ']' 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2451726 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2451726 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2451726' 00:17:59.916 killing process with pid 2451726 00:17:59.916 08:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2451726 00:17:59.916 08:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2451726 00:17:59.916 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:59.916 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:59.916 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:59.916 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:59.916 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:59.916 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:59.916 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.176 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.176 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:00.176 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.176 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.176 08:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.083 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:02.083 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.k8O /tmp/spdk.key-sha256.7kQ /tmp/spdk.key-sha384.GDc /tmp/spdk.key-sha512.rgr /tmp/spdk.key-sha512.bGF /tmp/spdk.key-sha384.xAR /tmp/spdk.key-sha256.oxD '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:02.083 00:18:02.083 real 2m31.116s 00:18:02.083 user 5m49.269s 00:18:02.083 sys 0m23.419s 00:18:02.083 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.083 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.083 ************************************ 00:18:02.083 END TEST nvmf_auth_target 00:18:02.083 ************************************ 00:18:02.083 08:00:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:02.083 08:00:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:02.083 08:00:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:02.083 08:00:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.083 08:00:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.083 ************************************ 00:18:02.083 START TEST nvmf_bdevio_no_huge 00:18:02.083 ************************************ 00:18:02.083 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:02.343 * Looking for test storage... 
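To recap what the nvmf_auth_target run that just ended was exercising: DH-HMAC-CHAP keys are rotated on the target subsystem first and on the already attached host controller second, and every mismatched combination must be refused. Reduced to the two RPCs involved (NQNs and the host socket path are the ones from this run, rpc.py stands for the full scripts/rpc.py path shown in the trace, secrets elided; this is an illustrative reading of the trace, not a script from the repository):

# 1) target side: install the new key pair for this host on the subsystem
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# 2) host side: re-key the live controller over the host RPC socket
rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# Negative cases checked above: a pair the target does not hold (key1/key3, key2/key0)
# is rejected with JSON-RPC error -13 "Permission denied", and attaching with keys the
# subsystem no longer accepts fails with -5 "Input/output error".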
00:18:02.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.343 --rc genhtml_branch_coverage=1 00:18:02.343 --rc genhtml_function_coverage=1 00:18:02.343 --rc genhtml_legend=1 00:18:02.343 --rc geninfo_all_blocks=1 00:18:02.343 --rc geninfo_unexecuted_blocks=1 00:18:02.343 00:18:02.343 ' 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.343 --rc genhtml_branch_coverage=1 00:18:02.343 --rc genhtml_function_coverage=1 00:18:02.343 --rc genhtml_legend=1 00:18:02.343 --rc geninfo_all_blocks=1 00:18:02.343 --rc geninfo_unexecuted_blocks=1 00:18:02.343 00:18:02.343 ' 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.343 --rc genhtml_branch_coverage=1 00:18:02.343 --rc genhtml_function_coverage=1 00:18:02.343 --rc genhtml_legend=1 00:18:02.343 --rc geninfo_all_blocks=1 00:18:02.343 --rc geninfo_unexecuted_blocks=1 00:18:02.343 00:18:02.343 ' 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.343 --rc genhtml_branch_coverage=1 00:18:02.343 --rc genhtml_function_coverage=1 00:18:02.343 --rc genhtml_legend=1 00:18:02.343 --rc geninfo_all_blocks=1 00:18:02.343 --rc geninfo_unexecuted_blocks=1 00:18:02.343 00:18:02.343 ' 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.343 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:02.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:02.344 08:00:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.615 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.615 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:07.615 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:07.616 
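The "line 33: [: : integer expression expected" message printed a little further up is bash's test builtin complaining that the left-hand side of -eq is an empty string (the traced command is '[' '' -eq 1 ']'). The test simply evaluates false and the script carries on, so it is noise rather than a failure; the same guard stays quiet if the value is defaulted to a number first. A minimal sketch, with an illustrative variable name rather than the one nvmf/common.sh actually uses:

flag=""
[ "$flag" -eq 1 ] && echo enabled        # warns: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo enabled   # defaulted to 0: no warning, the test is just false
(( ${flag:-0} == 1 )) && echo enabled    # arithmetic context behaves the same way
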
08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:07.616 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:07.616 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:07.616 Found net devices under 0000:86:00.0: cvl_0_0 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:07.616 Found net devices under 0000:86:00.1: cvl_0_1 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:07.616 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:07.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:18:07.617 00:18:07.617 --- 10.0.0.2 ping statistics --- 00:18:07.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.617 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:18:07.617 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:18:07.617 00:18:07.617 --- 10.0.0.1 ping statistics --- 00:18:07.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.617 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:18:07.617 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.617 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:07.617 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:07.617 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.617 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:07.617 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:07.617 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.617 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:07.617 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2458607 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2458607 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2458607 ']' 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.875 08:01:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.875 [2024-11-27 08:01:01.805571] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:18:07.875 [2024-11-27 08:01:01.805615] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:07.875 [2024-11-27 08:01:01.876726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.875 [2024-11-27 08:01:01.923833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.875 [2024-11-27 08:01:01.923869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.875 [2024-11-27 08:01:01.923876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.875 [2024-11-27 08:01:01.923882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.875 [2024-11-27 08:01:01.923887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
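Everything from "ip netns add" through the two pings and the nvmf_tgt launch above is the core of this test's setup: the target-side e810 port (cvl_0_0) is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened, and the target is started inside the namespace with --no-huge -s 1024 so DPDK allocates ordinary memory instead of hugepages (the point of the no_huge variant of this test). A condensed sketch of those steps; the commands mirror the trace (minus the iptables comment tag), paths are shortened to be relative to the SPDK tree, and the final polling loop stands in for the waitforlisten helper rather than reproducing it:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the host netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace without hugepages, capped at 1024 MiB of memory.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Poll the RPC socket until the app answers (waitforlisten adds retries and a timeout on top of this).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
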
00:18:07.875 [2024-11-27 08:01:01.925127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:07.875 [2024-11-27 08:01:01.925234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:07.875 [2024-11-27 08:01:01.925340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.875 [2024-11-27 08:01:01.925341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.133 [2024-11-27 08:01:02.082462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.133 Malloc0 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.133 [2024-11-27 08:01:02.126794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:08.133 { 00:18:08.133 "params": { 00:18:08.133 "name": "Nvme$subsystem", 00:18:08.133 "trtype": "$TEST_TRANSPORT", 00:18:08.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.133 "adrfam": "ipv4", 00:18:08.133 "trsvcid": "$NVMF_PORT", 00:18:08.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.133 "hdgst": ${hdgst:-false}, 00:18:08.133 "ddgst": ${ddgst:-false} 00:18:08.133 }, 00:18:08.133 "method": "bdev_nvme_attach_controller" 00:18:08.133 } 00:18:08.133 EOF 00:18:08.133 )") 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:08.133 08:01:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:08.133 "params": { 00:18:08.133 "name": "Nvme1", 00:18:08.133 "trtype": "tcp", 00:18:08.133 "traddr": "10.0.0.2", 00:18:08.133 "adrfam": "ipv4", 00:18:08.133 "trsvcid": "4420", 00:18:08.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.133 "hdgst": false, 00:18:08.133 "ddgst": false 00:18:08.133 }, 00:18:08.133 "method": "bdev_nvme_attach_controller" 00:18:08.133 }' 00:18:08.133 [2024-11-27 08:01:02.175702] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
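The gen_nvmf_target_json call above expands its template into the single bdev_nvme_attach_controller entry shown by the printf output, and bdevio receives it through process substitution, which is what the "--json /dev/fd/62" argument is. A sketch of roughly what the tool is fed: the params block is copied from the printed output, while the surrounding subsystems/bdev wrapper is the usual SPDK JSON-config shape rather than something visible in this log, and the helper name below is illustrative:

bdevio_json() {
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# bdevio itself also runs hugepage-free, mirroring the target's --no-huge mode.
./test/bdev/bdevio/bdevio --json <(bdevio_json) --no-huge -s 1024
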
00:18:08.133 [2024-11-27 08:01:02.175746] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2458631 ] 00:18:08.392 [2024-11-27 08:01:02.242338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:08.392 [2024-11-27 08:01:02.291628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.392 [2024-11-27 08:01:02.291725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.392 [2024-11-27 08:01:02.291726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.649 I/O targets: 00:18:08.649 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:08.649 00:18:08.649 00:18:08.649 CUnit - A unit testing framework for C - Version 2.1-3 00:18:08.649 http://cunit.sourceforge.net/ 00:18:08.649 00:18:08.649 00:18:08.649 Suite: bdevio tests on: Nvme1n1 00:18:08.649 Test: blockdev write read block ...passed 00:18:08.650 Test: blockdev write zeroes read block ...passed 00:18:08.650 Test: blockdev write zeroes read no split ...passed 00:18:08.650 Test: blockdev write zeroes read split ...passed 00:18:08.906 Test: blockdev write zeroes read split partial ...passed 00:18:08.906 Test: blockdev reset ...[2024-11-27 08:01:02.775277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:08.906 [2024-11-27 08:01:02.775344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc68e0 (9): Bad file descriptor 00:18:08.906 [2024-11-27 08:01:02.885257] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
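In the reset test above, the ERROR about failing to flush tqpair 0x1dc68e0 (Bad file descriptor) is a side effect of the qpair's socket already being torn down by the disconnect; the "Resetting controller successful" notice that follows is what precedes the test reporting passed. For poking at the same path by hand outside the bdevio suite, SPDK exposes the reset as an RPC; the calls below are that hypothetical manual equivalent, assuming the default /var/tmp/spdk.sock socket of whichever app has the controller attached:

./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme1   # disconnect + reconnect the controller
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme1n1          # confirm the namespace bdev is back
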
00:18:08.906 passed 00:18:08.906 Test: blockdev write read 8 blocks ...passed 00:18:08.906 Test: blockdev write read size > 128k ...passed 00:18:08.906 Test: blockdev write read invalid size ...passed 00:18:08.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:08.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:08.906 Test: blockdev write read max offset ...passed 00:18:08.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:09.163 Test: blockdev writev readv 8 blocks ...passed 00:18:09.163 Test: blockdev writev readv 30 x 1block ...passed 00:18:09.163 Test: blockdev writev readv block ...passed 00:18:09.163 Test: blockdev writev readv size > 128k ...passed 00:18:09.163 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:09.163 Test: blockdev comparev and writev ...[2024-11-27 08:01:03.138801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.163 [2024-11-27 08:01:03.138831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.163 [2024-11-27 08:01:03.138845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.163 [2024-11-27 08:01:03.138853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.163 [2024-11-27 08:01:03.139101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.163 [2024-11-27 08:01:03.139113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:09.164 [2024-11-27 08:01:03.139125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.164 [2024-11-27 08:01:03.139133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:09.164 [2024-11-27 08:01:03.139369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.164 [2024-11-27 08:01:03.139379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:09.164 [2024-11-27 08:01:03.139392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.164 [2024-11-27 08:01:03.139399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:09.164 [2024-11-27 08:01:03.139629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.164 [2024-11-27 08:01:03.139646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:09.164 [2024-11-27 08:01:03.139659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.164 [2024-11-27 08:01:03.139668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:09.164 passed 00:18:09.164 Test: blockdev nvme passthru rw ...passed 00:18:09.164 Test: blockdev nvme passthru vendor specific ...[2024-11-27 08:01:03.221357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.164 [2024-11-27 08:01:03.221379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:09.164 [2024-11-27 08:01:03.221504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.164 [2024-11-27 08:01:03.221515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:09.164 [2024-11-27 08:01:03.221623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.164 [2024-11-27 08:01:03.221633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:09.164 [2024-11-27 08:01:03.221745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.164 [2024-11-27 08:01:03.221755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:09.164 passed 00:18:09.164 Test: blockdev nvme admin passthru ...passed 00:18:09.421 Test: blockdev copy ...passed 00:18:09.421 00:18:09.421 Run Summary: Type Total Ran Passed Failed Inactive 00:18:09.421 suites 1 1 n/a 0 0 00:18:09.421 tests 23 23 23 0 0 00:18:09.421 asserts 152 152 152 0 n/a 00:18:09.421 00:18:09.421 Elapsed time = 1.323 seconds 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:09.678 rmmod nvme_tcp 00:18:09.678 rmmod nvme_fabrics 00:18:09.678 rmmod nvme_keyring 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2458607 ']' 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2458607 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2458607 ']' 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2458607 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458607 00:18:09.678 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:09.679 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:09.679 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458607' 00:18:09.679 killing process with pid 2458607 00:18:09.679 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2458607 00:18:09.679 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2458607 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.936 08:01:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:12.474 00:18:12.474 real 0m9.874s 00:18:12.474 user 0m12.157s 00:18:12.474 sys 0m4.912s 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.474 ************************************ 00:18:12.474 END TEST nvmf_bdevio_no_huge 00:18:12.474 ************************************ 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:12.474 ************************************ 00:18:12.474 START TEST nvmf_tls 00:18:12.474 ************************************ 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:12.474 * Looking for test storage... 00:18:12.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.474 --rc genhtml_branch_coverage=1 00:18:12.474 --rc genhtml_function_coverage=1 00:18:12.474 --rc genhtml_legend=1 00:18:12.474 --rc geninfo_all_blocks=1 00:18:12.474 --rc geninfo_unexecuted_blocks=1 00:18:12.474 00:18:12.474 ' 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.474 --rc genhtml_branch_coverage=1 00:18:12.474 --rc genhtml_function_coverage=1 00:18:12.474 --rc genhtml_legend=1 00:18:12.474 --rc geninfo_all_blocks=1 00:18:12.474 --rc geninfo_unexecuted_blocks=1 00:18:12.474 00:18:12.474 ' 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.474 --rc genhtml_branch_coverage=1 00:18:12.474 --rc genhtml_function_coverage=1 00:18:12.474 --rc genhtml_legend=1 00:18:12.474 --rc geninfo_all_blocks=1 00:18:12.474 --rc geninfo_unexecuted_blocks=1 00:18:12.474 00:18:12.474 ' 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.474 --rc genhtml_branch_coverage=1 00:18:12.474 --rc genhtml_function_coverage=1 00:18:12.474 --rc genhtml_legend=1 00:18:12.474 --rc geninfo_all_blocks=1 00:18:12.474 --rc geninfo_unexecuted_blocks=1 00:18:12.474 00:18:12.474 ' 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
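The scripts/common.sh run above (repeated here for tls.sh exactly as it was for the bdevio test) is only deciding which lcov flags to export: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, and since the installed lcov reports a version below 2 the harness keeps the old-style --rc lcov_branch_coverage / lcov_function_coverage options in LCOV_OPTS. A condensed illustration of that comparison, not the repository's cmp_versions verbatim:

version_lt() {                         # returns 0 (true) when $1 < $2
  local IFS='.-:' i
  local -a a=($1) b=($2)
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
  done
  return 1                             # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'lcov < 2: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
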
00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.474 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:12.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:12.475 08:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:17.748 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:17.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:17.749 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:17.749 Found net devices under 0000:86:00.0: cvl_0_0 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:17.749 Found net devices under 0000:86:00.1: cvl_0_1 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.749 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:18.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:18:18.008 00:18:18.008 --- 10.0.0.2 ping statistics --- 00:18:18.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.008 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:18.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:18:18.008 00:18:18.008 --- 10.0.0.1 ping statistics --- 00:18:18.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.008 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:18.008 08:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2462392 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2462392 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2462392 ']' 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.008 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.008 [2024-11-27 08:01:12.060819] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
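The block above carves a self-contained NVMe/TCP test network out of the two e810 ports discovered just before it: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, an iptables rule accepts TCP port 4420 on the initiator interface, and one ping in each direction proves the path before the target application is launched inside the namespace. A minimal sketch of that setup, using the interface names and addresses from this run (any connected port pair would do the same job):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) on cvl_0_1
    ping -c 1 10.0.0.2                                             # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace
    modprobe nvme-tcp                                              # kernel NVMe/TCP initiator, loaded as well

Every target-side process that follows (nvmf_tgt, spdk_nvme_perf) is then prefixed with ip netns exec cvl_0_0_ns_spdk, which is exactly what NVMF_TARGET_NS_CMD holds in the trace.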
00:18:18.008 [2024-11-27 08:01:12.060870] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.267 [2024-11-27 08:01:12.130442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.267 [2024-11-27 08:01:12.172493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.267 [2024-11-27 08:01:12.172528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.267 [2024-11-27 08:01:12.172536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.267 [2024-11-27 08:01:12.172542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.267 [2024-11-27 08:01:12.172547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.267 [2024-11-27 08:01:12.173115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.267 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.267 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:18.267 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.267 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.267 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.267 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.267 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:18.267 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:18.525 true 00:18:18.525 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.525 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:18.784 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:18.784 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:18.784 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:18.784 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.784 08:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:19.043 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:19.043 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:19.043 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:19.301 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.301 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:19.560 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:19.560 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:19.560 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.560 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:19.560 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:19.560 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:19.560 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:19.818 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.818 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:20.077 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:20.077 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:20.077 08:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:20.077 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.077 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:20.337 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.um2p7SAPNT 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.9NEyvKLDis 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.um2p7SAPNT 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.9NEyvKLDis 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:20.597 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:20.856 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.um2p7SAPNT 00:18:20.856 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.um2p7SAPNT 00:18:20.856 08:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:21.114 [2024-11-27 08:01:15.108768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.114 08:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:21.372 08:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:21.373 [2024-11-27 08:01:15.473706] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.373 [2024-11-27 08:01:15.473943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.631 08:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:21.631 malloc0 00:18:21.631 08:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.889 08:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.um2p7SAPNT 00:18:22.148 08:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:22.148 08:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.um2p7SAPNT 00:18:34.356 Initializing NVMe Controllers 00:18:34.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:34.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:34.356 Initialization complete. Launching workers. 00:18:34.356 ======================================================== 00:18:34.356 Latency(us) 00:18:34.356 Device Information : IOPS MiB/s Average min max 00:18:34.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16354.08 63.88 3913.55 806.43 4700.65 00:18:34.356 ======================================================== 00:18:34.356 Total : 16354.08 63.88 3913.55 806.43 4700.65 00:18:34.356 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.um2p7SAPNT 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.um2p7SAPNT 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2464740 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2464740 /var/tmp/bdevperf.sock 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2464740 ']' 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:34.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.356 [2024-11-27 08:01:26.413107] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:18:34.356 [2024-11-27 08:01:26.413157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464740 ] 00:18:34.356 [2024-11-27 08:01:26.472584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.356 [2024-11-27 08:01:26.515713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.um2p7SAPNT 00:18:34.356 08:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:34.356 [2024-11-27 08:01:26.971714] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.356 TLSTESTn1 00:18:34.356 08:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:34.356 Running I/O for 10 seconds... 
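With connectivity in place, the target-side TLS setup traced above condenses to a short RPC sequence: nvmf_tgt runs inside the namespace with --wait-for-rpc, the ssl socket implementation is selected and pinned to TLS 1.3, a PSK interchange key (prefix NVMeTLSkey-1, hash indicator 01, then a base64 blob derived from the configured hex key) is written to a mode-0600 file and registered in the keyring, and the subsystem gets a TLS listener plus a host entry bound to that key. A sketch of just that sequence, reusing the literal paths and the first interchange key produced in this run; rpc.py talks to the target over its default Unix socket, so it does not need the network namespace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    key_path=$(mktemp)                 # PSK interchange key emitted by format_interchange_psk above
    echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
    chmod 0600 "$key_path"

    $rpc sock_set_default_impl -i ssl                  # TLS lives in the ssl sock implementation
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init                          # finish startup of the --wait-for-rpc target
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener ("TLS support is considered experimental")
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The spdk_nvme_perf invocation that follows (-S ssl with --psk-path pointing at the same file) is the first end-to-end confirmation that a TLS-PSK handshake to 10.0.0.2:4420 actually completes.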
00:18:35.300 5318.00 IOPS, 20.77 MiB/s [2024-11-27T07:01:30.348Z] 5423.00 IOPS, 21.18 MiB/s [2024-11-27T07:01:31.281Z] 5425.00 IOPS, 21.19 MiB/s [2024-11-27T07:01:32.218Z] 5417.25 IOPS, 21.16 MiB/s [2024-11-27T07:01:33.595Z] 5430.40 IOPS, 21.21 MiB/s [2024-11-27T07:01:34.529Z] 5412.00 IOPS, 21.14 MiB/s [2024-11-27T07:01:35.464Z] 5410.71 IOPS, 21.14 MiB/s [2024-11-27T07:01:36.398Z] 5427.25 IOPS, 21.20 MiB/s [2024-11-27T07:01:37.335Z] 5427.56 IOPS, 21.20 MiB/s [2024-11-27T07:01:37.335Z] 5411.70 IOPS, 21.14 MiB/s 00:18:43.226 Latency(us) 00:18:43.226 [2024-11-27T07:01:37.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.226 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:43.226 Verification LBA range: start 0x0 length 0x2000 00:18:43.226 TLSTESTn1 : 10.02 5414.83 21.15 0.00 0.00 23599.70 5784.26 25188.62 00:18:43.226 [2024-11-27T07:01:37.335Z] =================================================================================================================== 00:18:43.226 [2024-11-27T07:01:37.335Z] Total : 5414.83 21.15 0.00 0.00 23599.70 5784.26 25188.62 00:18:43.226 { 00:18:43.226 "results": [ 00:18:43.226 { 00:18:43.226 "job": "TLSTESTn1", 00:18:43.226 "core_mask": "0x4", 00:18:43.226 "workload": "verify", 00:18:43.226 "status": "finished", 00:18:43.226 "verify_range": { 00:18:43.226 "start": 0, 00:18:43.226 "length": 8192 00:18:43.226 }, 00:18:43.226 "queue_depth": 128, 00:18:43.226 "io_size": 4096, 00:18:43.226 "runtime": 10.017489, 00:18:43.226 "iops": 5414.8300038063435, 00:18:43.226 "mibps": 21.15167970236853, 00:18:43.226 "io_failed": 0, 00:18:43.226 "io_timeout": 0, 00:18:43.226 "avg_latency_us": 23599.701715613075, 00:18:43.226 "min_latency_us": 5784.264347826087, 00:18:43.226 "max_latency_us": 25188.61913043478 00:18:43.226 } 00:18:43.226 ], 00:18:43.226 "core_count": 1 00:18:43.226 } 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2464740 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2464740 ']' 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2464740 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464740 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464740' 00:18:43.226 killing process with pid 2464740 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2464740 00:18:43.226 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.226 00:18:43.226 Latency(us) 00:18:43.226 [2024-11-27T07:01:37.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.226 [2024-11-27T07:01:37.335Z] 
=================================================================================================================== 00:18:43.226 [2024-11-27T07:01:37.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.226 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2464740 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9NEyvKLDis 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9NEyvKLDis 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9NEyvKLDis 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9NEyvKLDis 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2466578 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2466578 /var/tmp/bdevperf.sock 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2466578 ']' 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
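The user-space initiator check that just finished is reused by every sub-test that follows: bdevperf is launched with -z (start suspended, RPC only) on its own socket, the PSK is injected through that socket, a controller is attached with --psk, and bdevperf.py perform_tests drives the 10-second verify workload. The shape of it, with $key_path taken from the provisioning sketch above:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bdevperf.sock

    $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
    # the harness waits here (waitforlisten) until the app answers RPCs on $sock

    $spdk/scripts/rpc.py -s $sock keyring_file_add_key key0 "$key_path"
    $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests

With the matching key the attach succeeds and the verify pass above settles around 5.4K IOPS on the TLSTESTn1 bdev.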
00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.485 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.485 [2024-11-27 08:01:37.474851] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:18:43.485 [2024-11-27 08:01:37.474903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466578 ] 00:18:43.485 [2024-11-27 08:01:37.533471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.485 [2024-11-27 08:01:37.570557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.744 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.744 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.744 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9NEyvKLDis 00:18:44.002 08:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:44.002 [2024-11-27 08:01:38.038299] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.002 [2024-11-27 08:01:38.046810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:44.002 [2024-11-27 08:01:38.047665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c841a0 (107): Transport endpoint is not connected 00:18:44.002 [2024-11-27 08:01:38.048657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c841a0 (9): Bad file descriptor 00:18:44.002 [2024-11-27 08:01:38.049659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:44.002 [2024-11-27 08:01:38.049670] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:44.002 [2024-11-27 08:01:38.049678] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:44.002 [2024-11-27 08:01:38.049686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:44.002 request: 00:18:44.002 { 00:18:44.002 "name": "TLSTEST", 00:18:44.002 "trtype": "tcp", 00:18:44.002 "traddr": "10.0.0.2", 00:18:44.002 "adrfam": "ipv4", 00:18:44.002 "trsvcid": "4420", 00:18:44.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.002 "prchk_reftag": false, 00:18:44.002 "prchk_guard": false, 00:18:44.002 "hdgst": false, 00:18:44.002 "ddgst": false, 00:18:44.002 "psk": "key0", 00:18:44.002 "allow_unrecognized_csi": false, 00:18:44.002 "method": "bdev_nvme_attach_controller", 00:18:44.002 "req_id": 1 00:18:44.002 } 00:18:44.002 Got JSON-RPC error response 00:18:44.002 response: 00:18:44.002 { 00:18:44.002 "code": -5, 00:18:44.002 "message": "Input/output error" 00:18:44.002 } 00:18:44.003 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2466578 00:18:44.003 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2466578 ']' 00:18:44.003 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2466578 00:18:44.003 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.003 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.003 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466578 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466578' 00:18:44.261 killing process with pid 2466578 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2466578 00:18:44.261 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.261 00:18:44.261 Latency(us) 00:18:44.261 [2024-11-27T07:01:38.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.261 [2024-11-27T07:01:38.370Z] =================================================================================================================== 00:18:44.261 [2024-11-27T07:01:38.370Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2466578 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.um2p7SAPNT 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.um2p7SAPNT 00:18:44.261 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.um2p7SAPNT 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.um2p7SAPNT 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2466807 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2466807 /var/tmp/bdevperf.sock 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2466807 ']' 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.262 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.262 [2024-11-27 08:01:38.326921] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:18:44.262 [2024-11-27 08:01:38.326981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466807 ] 00:18:44.520 [2024-11-27 08:01:38.386746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.520 [2024-11-27 08:01:38.427983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.520 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.520 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.520 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.um2p7SAPNT 00:18:44.779 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:44.779 [2024-11-27 08:01:38.879946] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.038 [2024-11-27 08:01:38.888888] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:45.038 [2024-11-27 08:01:38.888912] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:45.038 [2024-11-27 08:01:38.888936] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:45.038 [2024-11-27 08:01:38.889399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246b1a0 (107): Transport endpoint is not connected 00:18:45.038 [2024-11-27 08:01:38.890392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246b1a0 (9): Bad file descriptor 00:18:45.038 [2024-11-27 08:01:38.891394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:45.038 [2024-11-27 08:01:38.891409] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:45.038 [2024-11-27 08:01:38.891417] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:45.038 [2024-11-27 08:01:38.891426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:45.038 request: 00:18:45.038 { 00:18:45.038 "name": "TLSTEST", 00:18:45.038 "trtype": "tcp", 00:18:45.038 "traddr": "10.0.0.2", 00:18:45.038 "adrfam": "ipv4", 00:18:45.038 "trsvcid": "4420", 00:18:45.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.038 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:45.038 "prchk_reftag": false, 00:18:45.038 "prchk_guard": false, 00:18:45.038 "hdgst": false, 00:18:45.038 "ddgst": false, 00:18:45.038 "psk": "key0", 00:18:45.038 "allow_unrecognized_csi": false, 00:18:45.038 "method": "bdev_nvme_attach_controller", 00:18:45.038 "req_id": 1 00:18:45.038 } 00:18:45.038 Got JSON-RPC error response 00:18:45.038 response: 00:18:45.038 { 00:18:45.038 "code": -5, 00:18:45.038 "message": "Input/output error" 00:18:45.038 } 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2466807 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2466807 ']' 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2466807 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466807 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466807' 00:18:45.038 killing process with pid 2466807 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2466807 00:18:45.038 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.038 00:18:45.038 Latency(us) 00:18:45.038 [2024-11-27T07:01:39.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.038 [2024-11-27T07:01:39.147Z] =================================================================================================================== 00:18:45.038 [2024-11-27T07:01:39.147Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.038 08:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2466807 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.um2p7SAPNT 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.um2p7SAPNT 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.um2p7SAPNT 00:18:45.038 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.um2p7SAPNT 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2466824 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2466824 /var/tmp/bdevperf.sock 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2466824 ']' 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.039 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.305 [2024-11-27 08:01:39.166646] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:18:45.305 [2024-11-27 08:01:39.166697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466824 ] 00:18:45.305 [2024-11-27 08:01:39.226981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.305 [2024-11-27 08:01:39.269256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.305 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.305 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.305 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.um2p7SAPNT 00:18:45.564 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:45.823 [2024-11-27 08:01:39.729648] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.823 [2024-11-27 08:01:39.741062] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:45.823 [2024-11-27 08:01:39.741084] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:45.823 [2024-11-27 08:01:39.741107] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:45.823 [2024-11-27 08:01:39.742112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9eb1a0 (107): Transport endpoint is not connected 00:18:45.823 [2024-11-27 08:01:39.743107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9eb1a0 (9): Bad file descriptor 00:18:45.823 [2024-11-27 08:01:39.744110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:45.823 [2024-11-27 08:01:39.744121] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:45.823 [2024-11-27 08:01:39.744129] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:45.823 [2024-11-27 08:01:39.744138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:45.823 request: 00:18:45.823 { 00:18:45.823 "name": "TLSTEST", 00:18:45.823 "trtype": "tcp", 00:18:45.823 "traddr": "10.0.0.2", 00:18:45.823 "adrfam": "ipv4", 00:18:45.823 "trsvcid": "4420", 00:18:45.823 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:45.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.823 "prchk_reftag": false, 00:18:45.823 "prchk_guard": false, 00:18:45.823 "hdgst": false, 00:18:45.823 "ddgst": false, 00:18:45.823 "psk": "key0", 00:18:45.823 "allow_unrecognized_csi": false, 00:18:45.823 "method": "bdev_nvme_attach_controller", 00:18:45.823 "req_id": 1 00:18:45.823 } 00:18:45.823 Got JSON-RPC error response 00:18:45.823 response: 00:18:45.823 { 00:18:45.823 "code": -5, 00:18:45.823 "message": "Input/output error" 00:18:45.823 } 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2466824 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2466824 ']' 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2466824 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466824 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466824' 00:18:45.823 killing process with pid 2466824 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2466824 00:18:45.823 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.823 00:18:45.823 Latency(us) 00:18:45.823 [2024-11-27T07:01:39.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.823 [2024-11-27T07:01:39.932Z] =================================================================================================================== 00:18:45.823 [2024-11-27T07:01:39.932Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.823 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2466824 00:18:46.082 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:46.082 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:46.082 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.082 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.083 
08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2467057 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2467057 /var/tmp/bdevperf.sock 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2467057 ']' 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.083 08:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.083 [2024-11-27 08:01:40.018573] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:18:46.083 [2024-11-27 08:01:40.018627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467057 ] 00:18:46.083 [2024-11-27 08:01:40.079950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.083 [2024-11-27 08:01:40.120548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.342 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.342 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.342 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:46.342 [2024-11-27 08:01:40.388639] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:46.342 [2024-11-27 08:01:40.388673] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:46.342 request: 00:18:46.342 { 00:18:46.342 "name": "key0", 00:18:46.342 "path": "", 00:18:46.342 "method": "keyring_file_add_key", 00:18:46.342 "req_id": 1 00:18:46.342 } 00:18:46.342 Got JSON-RPC error response 00:18:46.342 response: 00:18:46.342 { 00:18:46.342 "code": -1, 00:18:46.342 "message": "Operation not permitted" 00:18:46.342 } 00:18:46.342 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.601 [2024-11-27 08:01:40.577223] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.601 [2024-11-27 08:01:40.577254] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:46.601 request: 00:18:46.601 { 00:18:46.601 "name": "TLSTEST", 00:18:46.601 "trtype": "tcp", 00:18:46.601 "traddr": "10.0.0.2", 00:18:46.601 "adrfam": "ipv4", 00:18:46.601 "trsvcid": "4420", 00:18:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.601 "prchk_reftag": false, 00:18:46.601 "prchk_guard": false, 00:18:46.601 "hdgst": false, 00:18:46.601 "ddgst": false, 00:18:46.601 "psk": "key0", 00:18:46.601 "allow_unrecognized_csi": false, 00:18:46.601 "method": "bdev_nvme_attach_controller", 00:18:46.601 "req_id": 1 00:18:46.601 } 00:18:46.601 Got JSON-RPC error response 00:18:46.601 response: 00:18:46.601 { 00:18:46.601 "code": -126, 00:18:46.601 "message": "Required key not available" 00:18:46.601 } 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2467057 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2467057 ']' 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2467057 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2467057 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2467057' 00:18:46.601 killing process with pid 2467057 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2467057 00:18:46.601 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.601 00:18:46.601 Latency(us) 00:18:46.601 [2024-11-27T07:01:40.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.601 [2024-11-27T07:01:40.710Z] =================================================================================================================== 00:18:46.601 [2024-11-27T07:01:40.710Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.601 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2467057 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2462392 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2462392 ']' 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2462392 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2462392 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2462392' 00:18:46.860 killing process with pid 2462392 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2462392 00:18:46.860 08:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2462392 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:47.118 08:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.XCh5ManDQo 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.XCh5ManDQo 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2467302 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2467302 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2467302 ']' 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.118 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.118 [2024-11-27 08:01:41.126873] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:18:47.118 [2024-11-27 08:01:41.126921] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.118 [2024-11-27 08:01:41.190891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.449 [2024-11-27 08:01:41.232499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.449 [2024-11-27 08:01:41.232532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:47.449 [2024-11-27 08:01:41.232539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.449 [2024-11-27 08:01:41.232545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.449 [2024-11-27 08:01:41.232550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.449 [2024-11-27 08:01:41.233136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.449 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.449 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.449 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:47.449 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:47.449 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.449 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.449 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.XCh5ManDQo 00:18:47.449 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XCh5ManDQo 00:18:47.449 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:47.759 [2024-11-27 08:01:41.529535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.759 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:47.759 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:48.062 [2024-11-27 08:01:41.902495] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:48.062 [2024-11-27 08:01:41.902704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.062 08:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:48.062 malloc0 00:18:48.062 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:48.321 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XCh5ManDQo 00:18:48.579 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCh5ManDQo 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XCh5ManDQo 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2467571 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2467571 /var/tmp/bdevperf.sock 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2467571 ']' 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.838 [2024-11-27 08:01:42.740672] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:18:48.838 [2024-11-27 08:01:42.740720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467571 ] 00:18:48.838 [2024-11-27 08:01:42.799308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.838 [2024-11-27 08:01:42.842033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.838 08:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XCh5ManDQo 00:18:49.097 08:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:49.356 [2024-11-27 08:01:43.286027] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.356 TLSTESTn1 00:18:49.356 08:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:49.356 Running I/O for 10 seconds... 00:18:51.666 5526.00 IOPS, 21.59 MiB/s [2024-11-27T07:01:46.711Z] 5620.00 IOPS, 21.95 MiB/s [2024-11-27T07:01:47.646Z] 5620.00 IOPS, 21.95 MiB/s [2024-11-27T07:01:48.582Z] 5571.75 IOPS, 21.76 MiB/s [2024-11-27T07:01:49.517Z] 5555.80 IOPS, 21.70 MiB/s [2024-11-27T07:01:50.891Z] 5564.17 IOPS, 21.74 MiB/s [2024-11-27T07:01:51.828Z] 5548.86 IOPS, 21.68 MiB/s [2024-11-27T07:01:52.763Z] 5525.62 IOPS, 21.58 MiB/s [2024-11-27T07:01:53.699Z] 5524.89 IOPS, 21.58 MiB/s [2024-11-27T07:01:53.699Z] 5526.90 IOPS, 21.59 MiB/s 00:18:59.590 Latency(us) 00:18:59.590 [2024-11-27T07:01:53.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.590 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:59.590 Verification LBA range: start 0x0 length 0x2000 00:18:59.590 TLSTESTn1 : 10.02 5527.64 21.59 0.00 0.00 23116.51 5983.72 25302.59 00:18:59.590 [2024-11-27T07:01:53.699Z] =================================================================================================================== 00:18:59.590 [2024-11-27T07:01:53.699Z] Total : 5527.64 21.59 0.00 0.00 23116.51 5983.72 25302.59 00:18:59.590 { 00:18:59.590 "results": [ 00:18:59.590 { 00:18:59.590 "job": "TLSTESTn1", 00:18:59.590 "core_mask": "0x4", 00:18:59.590 "workload": "verify", 00:18:59.590 "status": "finished", 00:18:59.590 "verify_range": { 00:18:59.590 "start": 0, 00:18:59.590 "length": 8192 00:18:59.590 }, 00:18:59.590 "queue_depth": 128, 00:18:59.590 "io_size": 4096, 00:18:59.590 "runtime": 10.02127, 00:18:59.590 "iops": 5527.642703968659, 00:18:59.590 "mibps": 21.592354312377573, 00:18:59.590 "io_failed": 0, 00:18:59.590 "io_timeout": 0, 00:18:59.590 "avg_latency_us": 23116.510231746965, 00:18:59.590 "min_latency_us": 5983.721739130435, 00:18:59.590 "max_latency_us": 25302.594782608696 00:18:59.590 } 00:18:59.590 ], 00:18:59.590 
"core_count": 1 00:18:59.590 } 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2467571 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2467571 ']' 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2467571 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2467571 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2467571' 00:18:59.590 killing process with pid 2467571 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2467571 00:18:59.590 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.590 00:18:59.590 Latency(us) 00:18:59.590 [2024-11-27T07:01:53.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.590 [2024-11-27T07:01:53.699Z] =================================================================================================================== 00:18:59.590 [2024-11-27T07:01:53.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.590 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2467571 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.XCh5ManDQo 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCh5ManDQo 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCh5ManDQo 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCh5ManDQo 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XCh5ManDQo 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2469403 00:18:59.849 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.850 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.850 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2469403 /var/tmp/bdevperf.sock 00:18:59.850 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2469403 ']' 00:18:59.850 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.850 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.850 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.850 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.850 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.850 [2024-11-27 08:01:53.796361] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:18:59.850 [2024-11-27 08:01:53.796411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469403 ] 00:18:59.850 [2024-11-27 08:01:53.854338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.850 [2024-11-27 08:01:53.897289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.108 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.108 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:00.108 08:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XCh5ManDQo 00:19:00.108 [2024-11-27 08:01:54.154244] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XCh5ManDQo': 0100666 00:19:00.108 [2024-11-27 08:01:54.154270] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:00.108 request: 00:19:00.108 { 00:19:00.108 "name": "key0", 00:19:00.108 "path": "/tmp/tmp.XCh5ManDQo", 00:19:00.108 "method": "keyring_file_add_key", 00:19:00.109 "req_id": 1 00:19:00.109 } 00:19:00.109 Got JSON-RPC error response 00:19:00.109 response: 00:19:00.109 { 00:19:00.109 "code": -1, 00:19:00.109 "message": "Operation not permitted" 00:19:00.109 } 00:19:00.109 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:00.368 [2024-11-27 08:01:54.350848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.368 [2024-11-27 08:01:54.350874] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:00.368 request: 00:19:00.368 { 00:19:00.368 "name": "TLSTEST", 00:19:00.368 "trtype": "tcp", 00:19:00.368 "traddr": "10.0.0.2", 00:19:00.368 "adrfam": "ipv4", 00:19:00.368 "trsvcid": "4420", 00:19:00.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.368 "prchk_reftag": false, 00:19:00.368 "prchk_guard": false, 00:19:00.368 "hdgst": false, 00:19:00.368 "ddgst": false, 00:19:00.368 "psk": "key0", 00:19:00.368 "allow_unrecognized_csi": false, 00:19:00.368 "method": "bdev_nvme_attach_controller", 00:19:00.368 "req_id": 1 00:19:00.368 } 00:19:00.368 Got JSON-RPC error response 00:19:00.368 response: 00:19:00.368 { 00:19:00.368 "code": -126, 00:19:00.368 "message": "Required key not available" 00:19:00.368 } 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2469403 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2469403 ']' 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2469403 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469403 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469403' 00:19:00.368 killing process with pid 2469403 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2469403 00:19:00.368 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.368 00:19:00.368 Latency(us) 00:19:00.368 [2024-11-27T07:01:54.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.368 [2024-11-27T07:01:54.477Z] =================================================================================================================== 00:19:00.368 [2024-11-27T07:01:54.477Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.368 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2469403 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2467302 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2467302 ']' 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2467302 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2467302 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2467302' 00:19:00.628 killing process with pid 2467302 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2467302 00:19:00.628 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2467302 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2469491 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2469491 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2469491 ']' 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.887 08:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.887 [2024-11-27 08:01:54.867141] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:19:00.887 [2024-11-27 08:01:54.867189] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.887 [2024-11-27 08:01:54.932962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.887 [2024-11-27 08:01:54.973649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.887 [2024-11-27 08:01:54.973685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.887 [2024-11-27 08:01:54.973692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.887 [2024-11-27 08:01:54.973699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.887 [2024-11-27 08:01:54.973704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:00.887 [2024-11-27 08:01:54.974270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.145 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.145 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.145 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.145 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.XCh5ManDQo 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.XCh5ManDQo 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.XCh5ManDQo 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XCh5ManDQo 00:19:01.146 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:01.404 [2024-11-27 08:01:55.282873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.404 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:01.404 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:01.663 [2024-11-27 08:01:55.659850] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.663 [2024-11-27 08:01:55.660076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.663 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:01.922 malloc0 00:19:01.922 08:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:02.181 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XCh5ManDQo 00:19:02.181 [2024-11-27 
08:01:56.213376] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XCh5ManDQo': 0100666 00:19:02.181 [2024-11-27 08:01:56.213404] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:02.181 request: 00:19:02.181 { 00:19:02.181 "name": "key0", 00:19:02.181 "path": "/tmp/tmp.XCh5ManDQo", 00:19:02.181 "method": "keyring_file_add_key", 00:19:02.181 "req_id": 1 00:19:02.181 } 00:19:02.181 Got JSON-RPC error response 00:19:02.181 response: 00:19:02.181 { 00:19:02.181 "code": -1, 00:19:02.181 "message": "Operation not permitted" 00:19:02.181 } 00:19:02.181 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.440 [2024-11-27 08:01:56.389852] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:02.440 [2024-11-27 08:01:56.389884] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:02.440 request: 00:19:02.440 { 00:19:02.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.440 "host": "nqn.2016-06.io.spdk:host1", 00:19:02.440 "psk": "key0", 00:19:02.440 "method": "nvmf_subsystem_add_host", 00:19:02.440 "req_id": 1 00:19:02.440 } 00:19:02.440 Got JSON-RPC error response 00:19:02.440 response: 00:19:02.440 { 00:19:02.440 "code": -32603, 00:19:02.440 "message": "Internal error" 00:19:02.440 } 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2469491 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2469491 ']' 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2469491 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469491 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469491' 00:19:02.440 killing process with pid 2469491 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2469491 00:19:02.440 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2469491 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.XCh5ManDQo 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:02.700 08:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2469904 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2469904 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2469904 ']' 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.700 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.700 [2024-11-27 08:01:56.685876] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:19:02.700 [2024-11-27 08:01:56.685923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.700 [2024-11-27 08:01:56.750806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.700 [2024-11-27 08:01:56.791848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.700 [2024-11-27 08:01:56.791883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.700 [2024-11-27 08:01:56.791890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.700 [2024-11-27 08:01:56.791897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.700 [2024-11-27 08:01:56.791902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
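Note: with the key file restored to mode 0600 (the chmod at target/tls.sh@182 above), the target-side bring-up that follows succeeds end to end. Condensed, the RPC sequence driven through setup_nvmf_tgt amounts to the sketch below — the NQNs, listen address, and malloc sizes are the ones used in this trace, while the relative rpc.py invocation and the /tmp/psk.txt key path are hypothetical stand-ins; the calls go to the target's default /var/tmp/spdk.sock:

    # TCP transport plus a TLS-secured listener on the target
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k        # -k marks the listener as TLS-secured

    # backing namespace
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # register the 0600-permission PSK file and tie it to the allowed host
    scripts/rpc.py keyring_file_add_key key0 /tmp/psk.txt
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
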
00:19:02.700 [2024-11-27 08:01:56.792461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.959 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.959 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:02.959 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.959 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.959 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.959 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.959 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.XCh5ManDQo 00:19:02.959 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XCh5ManDQo 00:19:02.959 08:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:03.217 [2024-11-27 08:01:57.096362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.217 08:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:03.217 08:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.476 [2024-11-27 08:01:57.461295] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.476 [2024-11-27 08:01:57.461508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.476 08:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.734 malloc0 00:19:03.734 08:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.734 08:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XCh5ManDQo 00:19:03.992 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.251 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.251 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2470164 00:19:04.251 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.251 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2470164 /var/tmp/bdevperf.sock 00:19:04.251 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2470164 ']' 00:19:04.251 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.251 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.251 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.251 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.251 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.251 [2024-11-27 08:01:58.240835] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:19:04.251 [2024-11-27 08:01:58.240887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470164 ] 00:19:04.251 [2024-11-27 08:01:58.298743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.251 [2024-11-27 08:01:58.342001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.510 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.510 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.510 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XCh5ManDQo 00:19:04.769 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.770 [2024-11-27 08:01:58.783195] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.770 TLSTESTn1 00:19:04.770 08:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:05.337 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:05.337 "subsystems": [ 00:19:05.337 { 00:19:05.337 "subsystem": "keyring", 00:19:05.337 "config": [ 00:19:05.337 { 00:19:05.337 "method": "keyring_file_add_key", 00:19:05.337 "params": { 00:19:05.337 "name": "key0", 00:19:05.337 "path": "/tmp/tmp.XCh5ManDQo" 00:19:05.337 } 00:19:05.337 } 00:19:05.337 ] 00:19:05.337 }, 00:19:05.337 { 00:19:05.337 "subsystem": "iobuf", 00:19:05.337 "config": [ 00:19:05.337 { 00:19:05.337 "method": "iobuf_set_options", 00:19:05.337 "params": { 00:19:05.337 "small_pool_count": 8192, 00:19:05.337 "large_pool_count": 1024, 00:19:05.337 "small_bufsize": 8192, 00:19:05.337 "large_bufsize": 135168, 00:19:05.337 "enable_numa": false 00:19:05.337 } 00:19:05.337 } 00:19:05.337 ] 00:19:05.337 }, 00:19:05.337 { 00:19:05.337 "subsystem": "sock", 00:19:05.337 "config": [ 00:19:05.337 { 00:19:05.337 "method": "sock_set_default_impl", 00:19:05.337 "params": { 00:19:05.337 "impl_name": "posix" 
00:19:05.337 } 00:19:05.337 }, 00:19:05.337 { 00:19:05.337 "method": "sock_impl_set_options", 00:19:05.337 "params": { 00:19:05.337 "impl_name": "ssl", 00:19:05.337 "recv_buf_size": 4096, 00:19:05.337 "send_buf_size": 4096, 00:19:05.337 "enable_recv_pipe": true, 00:19:05.337 "enable_quickack": false, 00:19:05.337 "enable_placement_id": 0, 00:19:05.337 "enable_zerocopy_send_server": true, 00:19:05.337 "enable_zerocopy_send_client": false, 00:19:05.337 "zerocopy_threshold": 0, 00:19:05.337 "tls_version": 0, 00:19:05.337 "enable_ktls": false 00:19:05.337 } 00:19:05.337 }, 00:19:05.337 { 00:19:05.337 "method": "sock_impl_set_options", 00:19:05.337 "params": { 00:19:05.337 "impl_name": "posix", 00:19:05.337 "recv_buf_size": 2097152, 00:19:05.337 "send_buf_size": 2097152, 00:19:05.337 "enable_recv_pipe": true, 00:19:05.337 "enable_quickack": false, 00:19:05.337 "enable_placement_id": 0, 00:19:05.337 "enable_zerocopy_send_server": true, 00:19:05.337 "enable_zerocopy_send_client": false, 00:19:05.337 "zerocopy_threshold": 0, 00:19:05.337 "tls_version": 0, 00:19:05.337 "enable_ktls": false 00:19:05.337 } 00:19:05.337 } 00:19:05.337 ] 00:19:05.337 }, 00:19:05.337 { 00:19:05.337 "subsystem": "vmd", 00:19:05.337 "config": [] 00:19:05.337 }, 00:19:05.337 { 00:19:05.337 "subsystem": "accel", 00:19:05.337 "config": [ 00:19:05.337 { 00:19:05.337 "method": "accel_set_options", 00:19:05.337 "params": { 00:19:05.337 "small_cache_size": 128, 00:19:05.337 "large_cache_size": 16, 00:19:05.337 "task_count": 2048, 00:19:05.337 "sequence_count": 2048, 00:19:05.337 "buf_count": 2048 00:19:05.337 } 00:19:05.337 } 00:19:05.337 ] 00:19:05.337 }, 00:19:05.337 { 00:19:05.337 "subsystem": "bdev", 00:19:05.337 "config": [ 00:19:05.337 { 00:19:05.337 "method": "bdev_set_options", 00:19:05.337 "params": { 00:19:05.337 "bdev_io_pool_size": 65535, 00:19:05.337 "bdev_io_cache_size": 256, 00:19:05.337 "bdev_auto_examine": true, 00:19:05.337 "iobuf_small_cache_size": 128, 00:19:05.337 "iobuf_large_cache_size": 16 00:19:05.337 } 00:19:05.337 }, 00:19:05.337 { 00:19:05.337 "method": "bdev_raid_set_options", 00:19:05.337 "params": { 00:19:05.337 "process_window_size_kb": 1024, 00:19:05.337 "process_max_bandwidth_mb_sec": 0 00:19:05.337 } 00:19:05.337 }, 00:19:05.337 { 00:19:05.337 "method": "bdev_iscsi_set_options", 00:19:05.337 "params": { 00:19:05.337 "timeout_sec": 30 00:19:05.337 } 00:19:05.337 }, 00:19:05.337 { 00:19:05.337 "method": "bdev_nvme_set_options", 00:19:05.337 "params": { 00:19:05.337 "action_on_timeout": "none", 00:19:05.337 "timeout_us": 0, 00:19:05.337 "timeout_admin_us": 0, 00:19:05.337 "keep_alive_timeout_ms": 10000, 00:19:05.337 "arbitration_burst": 0, 00:19:05.337 "low_priority_weight": 0, 00:19:05.337 "medium_priority_weight": 0, 00:19:05.337 "high_priority_weight": 0, 00:19:05.337 "nvme_adminq_poll_period_us": 10000, 00:19:05.337 "nvme_ioq_poll_period_us": 0, 00:19:05.337 "io_queue_requests": 0, 00:19:05.337 "delay_cmd_submit": true, 00:19:05.337 "transport_retry_count": 4, 00:19:05.337 "bdev_retry_count": 3, 00:19:05.337 "transport_ack_timeout": 0, 00:19:05.337 "ctrlr_loss_timeout_sec": 0, 00:19:05.337 "reconnect_delay_sec": 0, 00:19:05.337 "fast_io_fail_timeout_sec": 0, 00:19:05.337 "disable_auto_failback": false, 00:19:05.337 "generate_uuids": false, 00:19:05.337 "transport_tos": 0, 00:19:05.337 "nvme_error_stat": false, 00:19:05.337 "rdma_srq_size": 0, 00:19:05.337 "io_path_stat": false, 00:19:05.337 "allow_accel_sequence": false, 00:19:05.337 "rdma_max_cq_size": 0, 00:19:05.337 
"rdma_cm_event_timeout_ms": 0, 00:19:05.338 "dhchap_digests": [ 00:19:05.338 "sha256", 00:19:05.338 "sha384", 00:19:05.338 "sha512" 00:19:05.338 ], 00:19:05.338 "dhchap_dhgroups": [ 00:19:05.338 "null", 00:19:05.338 "ffdhe2048", 00:19:05.338 "ffdhe3072", 00:19:05.338 "ffdhe4096", 00:19:05.338 "ffdhe6144", 00:19:05.338 "ffdhe8192" 00:19:05.338 ] 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "bdev_nvme_set_hotplug", 00:19:05.338 "params": { 00:19:05.338 "period_us": 100000, 00:19:05.338 "enable": false 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "bdev_malloc_create", 00:19:05.338 "params": { 00:19:05.338 "name": "malloc0", 00:19:05.338 "num_blocks": 8192, 00:19:05.338 "block_size": 4096, 00:19:05.338 "physical_block_size": 4096, 00:19:05.338 "uuid": "baa7f186-cd32-4bfa-82bb-bfef74df1766", 00:19:05.338 "optimal_io_boundary": 0, 00:19:05.338 "md_size": 0, 00:19:05.338 "dif_type": 0, 00:19:05.338 "dif_is_head_of_md": false, 00:19:05.338 "dif_pi_format": 0 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "bdev_wait_for_examine" 00:19:05.338 } 00:19:05.338 ] 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "subsystem": "nbd", 00:19:05.338 "config": [] 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "subsystem": "scheduler", 00:19:05.338 "config": [ 00:19:05.338 { 00:19:05.338 "method": "framework_set_scheduler", 00:19:05.338 "params": { 00:19:05.338 "name": "static" 00:19:05.338 } 00:19:05.338 } 00:19:05.338 ] 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "subsystem": "nvmf", 00:19:05.338 "config": [ 00:19:05.338 { 00:19:05.338 "method": "nvmf_set_config", 00:19:05.338 "params": { 00:19:05.338 "discovery_filter": "match_any", 00:19:05.338 "admin_cmd_passthru": { 00:19:05.338 "identify_ctrlr": false 00:19:05.338 }, 00:19:05.338 "dhchap_digests": [ 00:19:05.338 "sha256", 00:19:05.338 "sha384", 00:19:05.338 "sha512" 00:19:05.338 ], 00:19:05.338 "dhchap_dhgroups": [ 00:19:05.338 "null", 00:19:05.338 "ffdhe2048", 00:19:05.338 "ffdhe3072", 00:19:05.338 "ffdhe4096", 00:19:05.338 "ffdhe6144", 00:19:05.338 "ffdhe8192" 00:19:05.338 ] 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "nvmf_set_max_subsystems", 00:19:05.338 "params": { 00:19:05.338 "max_subsystems": 1024 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "nvmf_set_crdt", 00:19:05.338 "params": { 00:19:05.338 "crdt1": 0, 00:19:05.338 "crdt2": 0, 00:19:05.338 "crdt3": 0 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "nvmf_create_transport", 00:19:05.338 "params": { 00:19:05.338 "trtype": "TCP", 00:19:05.338 "max_queue_depth": 128, 00:19:05.338 "max_io_qpairs_per_ctrlr": 127, 00:19:05.338 "in_capsule_data_size": 4096, 00:19:05.338 "max_io_size": 131072, 00:19:05.338 "io_unit_size": 131072, 00:19:05.338 "max_aq_depth": 128, 00:19:05.338 "num_shared_buffers": 511, 00:19:05.338 "buf_cache_size": 4294967295, 00:19:05.338 "dif_insert_or_strip": false, 00:19:05.338 "zcopy": false, 00:19:05.338 "c2h_success": false, 00:19:05.338 "sock_priority": 0, 00:19:05.338 "abort_timeout_sec": 1, 00:19:05.338 "ack_timeout": 0, 00:19:05.338 "data_wr_pool_size": 0 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "nvmf_create_subsystem", 00:19:05.338 "params": { 00:19:05.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.338 "allow_any_host": false, 00:19:05.338 "serial_number": "SPDK00000000000001", 00:19:05.338 "model_number": "SPDK bdev Controller", 00:19:05.338 "max_namespaces": 10, 00:19:05.338 "min_cntlid": 1, 00:19:05.338 
"max_cntlid": 65519, 00:19:05.338 "ana_reporting": false 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "nvmf_subsystem_add_host", 00:19:05.338 "params": { 00:19:05.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.338 "host": "nqn.2016-06.io.spdk:host1", 00:19:05.338 "psk": "key0" 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "nvmf_subsystem_add_ns", 00:19:05.338 "params": { 00:19:05.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.338 "namespace": { 00:19:05.338 "nsid": 1, 00:19:05.338 "bdev_name": "malloc0", 00:19:05.338 "nguid": "BAA7F186CD324BFA82BBBFEF74DF1766", 00:19:05.338 "uuid": "baa7f186-cd32-4bfa-82bb-bfef74df1766", 00:19:05.338 "no_auto_visible": false 00:19:05.338 } 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "nvmf_subsystem_add_listener", 00:19:05.338 "params": { 00:19:05.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.338 "listen_address": { 00:19:05.338 "trtype": "TCP", 00:19:05.338 "adrfam": "IPv4", 00:19:05.338 "traddr": "10.0.0.2", 00:19:05.338 "trsvcid": "4420" 00:19:05.338 }, 00:19:05.338 "secure_channel": true 00:19:05.338 } 00:19:05.338 } 00:19:05.338 ] 00:19:05.338 } 00:19:05.338 ] 00:19:05.338 }' 00:19:05.338 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:05.338 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:05.338 "subsystems": [ 00:19:05.338 { 00:19:05.338 "subsystem": "keyring", 00:19:05.338 "config": [ 00:19:05.338 { 00:19:05.338 "method": "keyring_file_add_key", 00:19:05.338 "params": { 00:19:05.338 "name": "key0", 00:19:05.338 "path": "/tmp/tmp.XCh5ManDQo" 00:19:05.338 } 00:19:05.338 } 00:19:05.338 ] 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "subsystem": "iobuf", 00:19:05.338 "config": [ 00:19:05.338 { 00:19:05.338 "method": "iobuf_set_options", 00:19:05.338 "params": { 00:19:05.338 "small_pool_count": 8192, 00:19:05.338 "large_pool_count": 1024, 00:19:05.338 "small_bufsize": 8192, 00:19:05.338 "large_bufsize": 135168, 00:19:05.338 "enable_numa": false 00:19:05.338 } 00:19:05.338 } 00:19:05.338 ] 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "subsystem": "sock", 00:19:05.338 "config": [ 00:19:05.338 { 00:19:05.338 "method": "sock_set_default_impl", 00:19:05.338 "params": { 00:19:05.338 "impl_name": "posix" 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "sock_impl_set_options", 00:19:05.338 "params": { 00:19:05.338 "impl_name": "ssl", 00:19:05.338 "recv_buf_size": 4096, 00:19:05.338 "send_buf_size": 4096, 00:19:05.338 "enable_recv_pipe": true, 00:19:05.338 "enable_quickack": false, 00:19:05.338 "enable_placement_id": 0, 00:19:05.338 "enable_zerocopy_send_server": true, 00:19:05.338 "enable_zerocopy_send_client": false, 00:19:05.338 "zerocopy_threshold": 0, 00:19:05.338 "tls_version": 0, 00:19:05.338 "enable_ktls": false 00:19:05.338 } 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "method": "sock_impl_set_options", 00:19:05.338 "params": { 00:19:05.338 "impl_name": "posix", 00:19:05.338 "recv_buf_size": 2097152, 00:19:05.338 "send_buf_size": 2097152, 00:19:05.338 "enable_recv_pipe": true, 00:19:05.338 "enable_quickack": false, 00:19:05.338 "enable_placement_id": 0, 00:19:05.338 "enable_zerocopy_send_server": true, 00:19:05.338 "enable_zerocopy_send_client": false, 00:19:05.338 "zerocopy_threshold": 0, 00:19:05.338 "tls_version": 0, 00:19:05.338 "enable_ktls": false 00:19:05.338 } 00:19:05.338 
} 00:19:05.338 ] 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "subsystem": "vmd", 00:19:05.339 "config": [] 00:19:05.339 }, 00:19:05.339 { 00:19:05.339 "subsystem": "accel", 00:19:05.339 "config": [ 00:19:05.339 { 00:19:05.339 "method": "accel_set_options", 00:19:05.339 "params": { 00:19:05.339 "small_cache_size": 128, 00:19:05.339 "large_cache_size": 16, 00:19:05.339 "task_count": 2048, 00:19:05.339 "sequence_count": 2048, 00:19:05.339 "buf_count": 2048 00:19:05.339 } 00:19:05.339 } 00:19:05.339 ] 00:19:05.339 }, 00:19:05.339 { 00:19:05.339 "subsystem": "bdev", 00:19:05.339 "config": [ 00:19:05.339 { 00:19:05.339 "method": "bdev_set_options", 00:19:05.339 "params": { 00:19:05.339 "bdev_io_pool_size": 65535, 00:19:05.339 "bdev_io_cache_size": 256, 00:19:05.339 "bdev_auto_examine": true, 00:19:05.339 "iobuf_small_cache_size": 128, 00:19:05.339 "iobuf_large_cache_size": 16 00:19:05.339 } 00:19:05.339 }, 00:19:05.339 { 00:19:05.339 "method": "bdev_raid_set_options", 00:19:05.339 "params": { 00:19:05.339 "process_window_size_kb": 1024, 00:19:05.339 "process_max_bandwidth_mb_sec": 0 00:19:05.339 } 00:19:05.339 }, 00:19:05.339 { 00:19:05.339 "method": "bdev_iscsi_set_options", 00:19:05.339 "params": { 00:19:05.339 "timeout_sec": 30 00:19:05.339 } 00:19:05.339 }, 00:19:05.339 { 00:19:05.339 "method": "bdev_nvme_set_options", 00:19:05.339 "params": { 00:19:05.339 "action_on_timeout": "none", 00:19:05.339 "timeout_us": 0, 00:19:05.339 "timeout_admin_us": 0, 00:19:05.339 "keep_alive_timeout_ms": 10000, 00:19:05.339 "arbitration_burst": 0, 00:19:05.339 "low_priority_weight": 0, 00:19:05.339 "medium_priority_weight": 0, 00:19:05.339 "high_priority_weight": 0, 00:19:05.339 "nvme_adminq_poll_period_us": 10000, 00:19:05.339 "nvme_ioq_poll_period_us": 0, 00:19:05.339 "io_queue_requests": 512, 00:19:05.339 "delay_cmd_submit": true, 00:19:05.339 "transport_retry_count": 4, 00:19:05.339 "bdev_retry_count": 3, 00:19:05.339 "transport_ack_timeout": 0, 00:19:05.339 "ctrlr_loss_timeout_sec": 0, 00:19:05.339 "reconnect_delay_sec": 0, 00:19:05.339 "fast_io_fail_timeout_sec": 0, 00:19:05.339 "disable_auto_failback": false, 00:19:05.339 "generate_uuids": false, 00:19:05.339 "transport_tos": 0, 00:19:05.339 "nvme_error_stat": false, 00:19:05.339 "rdma_srq_size": 0, 00:19:05.339 "io_path_stat": false, 00:19:05.339 "allow_accel_sequence": false, 00:19:05.339 "rdma_max_cq_size": 0, 00:19:05.339 "rdma_cm_event_timeout_ms": 0, 00:19:05.339 "dhchap_digests": [ 00:19:05.339 "sha256", 00:19:05.339 "sha384", 00:19:05.339 "sha512" 00:19:05.339 ], 00:19:05.339 "dhchap_dhgroups": [ 00:19:05.339 "null", 00:19:05.339 "ffdhe2048", 00:19:05.339 "ffdhe3072", 00:19:05.339 "ffdhe4096", 00:19:05.339 "ffdhe6144", 00:19:05.339 "ffdhe8192" 00:19:05.339 ] 00:19:05.339 } 00:19:05.339 }, 00:19:05.339 { 00:19:05.339 "method": "bdev_nvme_attach_controller", 00:19:05.339 "params": { 00:19:05.339 "name": "TLSTEST", 00:19:05.339 "trtype": "TCP", 00:19:05.339 "adrfam": "IPv4", 00:19:05.339 "traddr": "10.0.0.2", 00:19:05.339 "trsvcid": "4420", 00:19:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.339 "prchk_reftag": false, 00:19:05.339 "prchk_guard": false, 00:19:05.339 "ctrlr_loss_timeout_sec": 0, 00:19:05.339 "reconnect_delay_sec": 0, 00:19:05.339 "fast_io_fail_timeout_sec": 0, 00:19:05.339 "psk": "key0", 00:19:05.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.339 "hdgst": false, 00:19:05.339 "ddgst": false, 00:19:05.339 "multipath": "multipath" 00:19:05.339 } 00:19:05.339 }, 00:19:05.339 { 00:19:05.339 "method": 
"bdev_nvme_set_hotplug", 00:19:05.339 "params": { 00:19:05.339 "period_us": 100000, 00:19:05.339 "enable": false 00:19:05.339 } 00:19:05.339 }, 00:19:05.339 { 00:19:05.339 "method": "bdev_wait_for_examine" 00:19:05.339 } 00:19:05.339 ] 00:19:05.339 }, 00:19:05.339 { 00:19:05.339 "subsystem": "nbd", 00:19:05.339 "config": [] 00:19:05.339 } 00:19:05.339 ] 00:19:05.339 }' 00:19:05.339 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2470164 00:19:05.339 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2470164 ']' 00:19:05.339 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2470164 00:19:05.339 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:05.339 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.339 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470164 00:19:05.597 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:05.597 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:05.597 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470164' 00:19:05.597 killing process with pid 2470164 00:19:05.597 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2470164 00:19:05.597 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.597 00:19:05.597 Latency(us) 00:19:05.597 [2024-11-27T07:01:59.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.597 [2024-11-27T07:01:59.706Z] =================================================================================================================== 00:19:05.597 [2024-11-27T07:01:59.707Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2470164 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2469904 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2469904 ']' 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2469904 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469904 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469904' 00:19:05.598 killing process with pid 2469904 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2469904 00:19:05.598 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2469904 00:19:05.857 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:05.857 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:05.857 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.857 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:05.857 "subsystems": [ 00:19:05.857 { 00:19:05.857 "subsystem": "keyring", 00:19:05.857 "config": [ 00:19:05.857 { 00:19:05.857 "method": "keyring_file_add_key", 00:19:05.857 "params": { 00:19:05.857 "name": "key0", 00:19:05.857 "path": "/tmp/tmp.XCh5ManDQo" 00:19:05.857 } 00:19:05.857 } 00:19:05.857 ] 00:19:05.857 }, 00:19:05.857 { 00:19:05.857 "subsystem": "iobuf", 00:19:05.857 "config": [ 00:19:05.857 { 00:19:05.857 "method": "iobuf_set_options", 00:19:05.857 "params": { 00:19:05.857 "small_pool_count": 8192, 00:19:05.857 "large_pool_count": 1024, 00:19:05.857 "small_bufsize": 8192, 00:19:05.857 "large_bufsize": 135168, 00:19:05.857 "enable_numa": false 00:19:05.857 } 00:19:05.857 } 00:19:05.857 ] 00:19:05.857 }, 00:19:05.857 { 00:19:05.857 "subsystem": "sock", 00:19:05.857 "config": [ 00:19:05.857 { 00:19:05.857 "method": "sock_set_default_impl", 00:19:05.857 "params": { 00:19:05.857 "impl_name": "posix" 00:19:05.857 } 00:19:05.857 }, 00:19:05.857 { 00:19:05.857 "method": "sock_impl_set_options", 00:19:05.857 "params": { 00:19:05.857 "impl_name": "ssl", 00:19:05.857 "recv_buf_size": 4096, 00:19:05.857 "send_buf_size": 4096, 00:19:05.857 "enable_recv_pipe": true, 00:19:05.857 "enable_quickack": false, 00:19:05.857 "enable_placement_id": 0, 00:19:05.857 "enable_zerocopy_send_server": true, 00:19:05.857 "enable_zerocopy_send_client": false, 00:19:05.857 "zerocopy_threshold": 0, 00:19:05.857 "tls_version": 0, 00:19:05.857 "enable_ktls": false 00:19:05.857 } 00:19:05.857 }, 00:19:05.857 { 00:19:05.857 "method": "sock_impl_set_options", 00:19:05.857 "params": { 00:19:05.857 "impl_name": "posix", 00:19:05.857 "recv_buf_size": 2097152, 00:19:05.857 "send_buf_size": 2097152, 00:19:05.857 "enable_recv_pipe": true, 00:19:05.857 "enable_quickack": false, 00:19:05.857 "enable_placement_id": 0, 00:19:05.857 "enable_zerocopy_send_server": true, 00:19:05.857 "enable_zerocopy_send_client": false, 00:19:05.857 "zerocopy_threshold": 0, 00:19:05.857 "tls_version": 0, 00:19:05.857 "enable_ktls": false 00:19:05.857 } 00:19:05.857 } 00:19:05.857 ] 00:19:05.857 }, 00:19:05.857 { 00:19:05.857 "subsystem": "vmd", 00:19:05.857 "config": [] 00:19:05.857 }, 00:19:05.857 { 00:19:05.857 "subsystem": "accel", 00:19:05.858 "config": [ 00:19:05.858 { 00:19:05.858 "method": "accel_set_options", 00:19:05.858 "params": { 00:19:05.858 "small_cache_size": 128, 00:19:05.858 "large_cache_size": 16, 00:19:05.858 "task_count": 2048, 00:19:05.858 "sequence_count": 2048, 00:19:05.858 "buf_count": 2048 00:19:05.858 } 00:19:05.858 } 00:19:05.858 ] 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "subsystem": "bdev", 00:19:05.858 "config": [ 00:19:05.858 { 00:19:05.858 "method": "bdev_set_options", 00:19:05.858 "params": { 00:19:05.858 "bdev_io_pool_size": 65535, 00:19:05.858 "bdev_io_cache_size": 256, 00:19:05.858 "bdev_auto_examine": true, 00:19:05.858 "iobuf_small_cache_size": 128, 00:19:05.858 "iobuf_large_cache_size": 16 00:19:05.858 } 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "method": "bdev_raid_set_options", 00:19:05.858 "params": { 00:19:05.858 "process_window_size_kb": 1024, 00:19:05.858 "process_max_bandwidth_mb_sec": 0 00:19:05.858 } 00:19:05.858 }, 
00:19:05.858 { 00:19:05.858 "method": "bdev_iscsi_set_options", 00:19:05.858 "params": { 00:19:05.858 "timeout_sec": 30 00:19:05.858 } 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "method": "bdev_nvme_set_options", 00:19:05.858 "params": { 00:19:05.858 "action_on_timeout": "none", 00:19:05.858 "timeout_us": 0, 00:19:05.858 "timeout_admin_us": 0, 00:19:05.858 "keep_alive_timeout_ms": 10000, 00:19:05.858 "arbitration_burst": 0, 00:19:05.858 "low_priority_weight": 0, 00:19:05.858 "medium_priority_weight": 0, 00:19:05.858 "high_priority_weight": 0, 00:19:05.858 "nvme_adminq_poll_period_us": 10000, 00:19:05.858 "nvme_ioq_poll_period_us": 0, 00:19:05.858 "io_queue_requests": 0, 00:19:05.858 "delay_cmd_submit": true, 00:19:05.858 "transport_retry_count": 4, 00:19:05.858 "bdev_retry_count": 3, 00:19:05.858 "transport_ack_timeout": 0, 00:19:05.858 "ctrlr_loss_timeout_sec": 0, 00:19:05.858 "reconnect_delay_sec": 0, 00:19:05.858 "fast_io_fail_timeout_sec": 0, 00:19:05.858 "disable_auto_failback": false, 00:19:05.858 "generate_uuids": false, 00:19:05.858 "transport_tos": 0, 00:19:05.858 "nvme_error_stat": false, 00:19:05.858 "rdma_srq_size": 0, 00:19:05.858 "io_path_stat": false, 00:19:05.858 "allow_accel_sequence": false, 00:19:05.858 "rdma_max_cq_size": 0, 00:19:05.858 "rdma_cm_event_timeout_ms": 0, 00:19:05.858 "dhchap_digests": [ 00:19:05.858 "sha256", 00:19:05.858 "sha384", 00:19:05.858 "sha512" 00:19:05.858 ], 00:19:05.858 "dhchap_dhgroups": [ 00:19:05.858 "null", 00:19:05.858 "ffdhe2048", 00:19:05.858 "ffdhe3072", 00:19:05.858 "ffdhe4096", 00:19:05.858 "ffdhe6144", 00:19:05.858 "ffdhe8192" 00:19:05.858 ] 00:19:05.858 } 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "method": "bdev_nvme_set_hotplug", 00:19:05.858 "params": { 00:19:05.858 "period_us": 100000, 00:19:05.858 "enable": false 00:19:05.858 } 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "method": "bdev_malloc_create", 00:19:05.858 "params": { 00:19:05.858 "name": "malloc0", 00:19:05.858 "num_blocks": 8192, 00:19:05.858 "block_size": 4096, 00:19:05.858 "physical_block_size": 4096, 00:19:05.858 "uuid": "baa7f186-cd32-4bfa-82bb-bfef74df1766", 00:19:05.858 "optimal_io_boundary": 0, 00:19:05.858 "md_size": 0, 00:19:05.858 "dif_type": 0, 00:19:05.858 "dif_is_head_of_md": false, 00:19:05.858 "dif_pi_format": 0 00:19:05.858 } 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "method": "bdev_wait_for_examine" 00:19:05.858 } 00:19:05.858 ] 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "subsystem": "nbd", 00:19:05.858 "config": [] 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "subsystem": "scheduler", 00:19:05.858 "config": [ 00:19:05.858 { 00:19:05.858 "method": "framework_set_scheduler", 00:19:05.858 "params": { 00:19:05.858 "name": "static" 00:19:05.858 } 00:19:05.858 } 00:19:05.858 ] 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "subsystem": "nvmf", 00:19:05.858 "config": [ 00:19:05.858 { 00:19:05.858 "method": "nvmf_set_config", 00:19:05.858 "params": { 00:19:05.858 "discovery_filter": "match_any", 00:19:05.858 "admin_cmd_passthru": { 00:19:05.858 "identify_ctrlr": false 00:19:05.858 }, 00:19:05.858 "dhchap_digests": [ 00:19:05.858 "sha256", 00:19:05.858 "sha384", 00:19:05.858 "sha512" 00:19:05.858 ], 00:19:05.858 "dhchap_dhgroups": [ 00:19:05.858 "null", 00:19:05.858 "ffdhe2048", 00:19:05.858 "ffdhe3072", 00:19:05.858 "ffdhe4096", 00:19:05.858 "ffdhe6144", 00:19:05.858 "ffdhe8192" 00:19:05.858 ] 00:19:05.858 } 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "method": "nvmf_set_max_subsystems", 00:19:05.858 "params": { 00:19:05.858 "max_subsystems": 1024 
00:19:05.858 } 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "method": "nvmf_set_crdt", 00:19:05.858 "params": { 00:19:05.858 "crdt1": 0, 00:19:05.858 "crdt2": 0, 00:19:05.858 "crdt3": 0 00:19:05.858 } 00:19:05.858 }, 00:19:05.858 { 00:19:05.858 "method": "nvmf_create_transport", 00:19:05.858 "params": { 00:19:05.858 "trtype": "TCP", 00:19:05.858 "max_queue_depth": 128, 00:19:05.858 "max_io_qpairs_per_ctrlr": 127, 00:19:05.858 "in_capsule_data_size": 4096, 00:19:05.858 "max_io_size": 131072, 00:19:05.858 "io_unit_size": 131072, 00:19:05.858 "max_aq_depth": 128, 00:19:05.858 "num_shared_buffers": 511, 00:19:05.858 "buf_cache_size": 4294967295, 00:19:05.859 "dif_insert_or_strip": false, 00:19:05.859 "zcopy": false, 00:19:05.859 "c2h_success": false, 00:19:05.859 "sock_priority": 0, 00:19:05.859 "abort_timeout_sec": 1, 00:19:05.859 "ack_timeout": 0, 00:19:05.859 "data_wr_pool_size": 0 00:19:05.859 } 00:19:05.859 }, 00:19:05.859 { 00:19:05.859 "method": "nvmf_create_subsystem", 00:19:05.859 "params": { 00:19:05.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.859 "allow_any_host": false, 00:19:05.859 "serial_number": "SPDK00000000000001", 00:19:05.859 "model_number": "SPDK bdev Controller", 00:19:05.859 "max_namespaces": 10, 00:19:05.859 "min_cntlid": 1, 00:19:05.859 "max_cntlid": 65519, 00:19:05.859 "ana_reporting": false 00:19:05.859 } 00:19:05.859 }, 00:19:05.859 { 00:19:05.859 "method": "nvmf_subsystem_add_host", 00:19:05.859 "params": { 00:19:05.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.859 "host": "nqn.2016-06.io.spdk:host1", 00:19:05.859 "psk": "key0" 00:19:05.859 } 00:19:05.859 }, 00:19:05.859 { 00:19:05.859 "method": "nvmf_subsystem_add_ns", 00:19:05.859 "params": { 00:19:05.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.859 "namespace": { 00:19:05.859 "nsid": 1, 00:19:05.859 "bdev_name": "malloc0", 00:19:05.859 "nguid": "BAA7F186CD324BFA82BBBFEF74DF1766", 00:19:05.859 "uuid": "baa7f186-cd32-4bfa-82bb-bfef74df1766", 00:19:05.859 "no_auto_visible": false 00:19:05.859 } 00:19:05.859 } 00:19:05.859 }, 00:19:05.859 { 00:19:05.859 "method": "nvmf_subsystem_add_listener", 00:19:05.859 "params": { 00:19:05.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.859 "listen_address": { 00:19:05.859 "trtype": "TCP", 00:19:05.859 "adrfam": "IPv4", 00:19:05.859 "traddr": "10.0.0.2", 00:19:05.859 "trsvcid": "4420" 00:19:05.859 }, 00:19:05.859 "secure_channel": true 00:19:05.859 } 00:19:05.859 } 00:19:05.859 ] 00:19:05.859 } 00:19:05.859 ] 00:19:05.859 }' 00:19:05.859 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.859 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2470415 00:19:05.859 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:05.859 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2470415 00:19:05.859 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2470415 ']' 00:19:05.859 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.859 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.859 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:05.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.859 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.859 08:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.859 [2024-11-27 08:01:59.904789] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:19:05.859 [2024-11-27 08:01:59.904837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.118 [2024-11-27 08:01:59.970179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.118 [2024-11-27 08:02:00.008322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.118 [2024-11-27 08:02:00.008358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.118 [2024-11-27 08:02:00.008366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.118 [2024-11-27 08:02:00.008373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.118 [2024-11-27 08:02:00.008378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.118 [2024-11-27 08:02:00.008954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.118 [2024-11-27 08:02:00.223800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.376 [2024-11-27 08:02:00.255818] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.376 [2024-11-27 08:02:00.256028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.633 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.633 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:06.633 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:06.633 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.633 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.891 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.892 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2470662 00:19:06.892 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2470662 /var/tmp/bdevperf.sock 00:19:06.892 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2470662 ']' 00:19:06.892 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.892 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:06.892 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.892 08:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.892 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:06.892 "subsystems": [ 00:19:06.892 { 00:19:06.892 "subsystem": "keyring", 00:19:06.892 "config": [ 00:19:06.892 { 00:19:06.892 "method": "keyring_file_add_key", 00:19:06.892 "params": { 00:19:06.892 "name": "key0", 00:19:06.892 "path": "/tmp/tmp.XCh5ManDQo" 00:19:06.892 } 00:19:06.892 } 00:19:06.892 ] 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "subsystem": "iobuf", 00:19:06.892 "config": [ 00:19:06.892 { 00:19:06.892 "method": "iobuf_set_options", 00:19:06.892 "params": { 00:19:06.892 "small_pool_count": 8192, 00:19:06.892 "large_pool_count": 1024, 00:19:06.892 "small_bufsize": 8192, 00:19:06.892 "large_bufsize": 135168, 00:19:06.892 "enable_numa": false 00:19:06.892 } 00:19:06.892 } 00:19:06.892 ] 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "subsystem": "sock", 00:19:06.892 "config": [ 00:19:06.892 { 00:19:06.892 "method": "sock_set_default_impl", 00:19:06.892 "params": { 00:19:06.892 "impl_name": "posix" 00:19:06.892 } 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "method": "sock_impl_set_options", 00:19:06.892 "params": { 00:19:06.892 "impl_name": "ssl", 00:19:06.892 "recv_buf_size": 4096, 00:19:06.892 "send_buf_size": 4096, 00:19:06.892 "enable_recv_pipe": true, 00:19:06.892 "enable_quickack": false, 00:19:06.892 "enable_placement_id": 0, 00:19:06.892 "enable_zerocopy_send_server": true, 00:19:06.892 "enable_zerocopy_send_client": false, 00:19:06.892 "zerocopy_threshold": 0, 00:19:06.892 "tls_version": 0, 00:19:06.892 "enable_ktls": false 00:19:06.892 } 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "method": "sock_impl_set_options", 00:19:06.892 "params": { 00:19:06.892 "impl_name": "posix", 00:19:06.892 "recv_buf_size": 2097152, 00:19:06.892 "send_buf_size": 2097152, 00:19:06.892 "enable_recv_pipe": true, 00:19:06.892 "enable_quickack": false, 00:19:06.892 "enable_placement_id": 0, 00:19:06.892 "enable_zerocopy_send_server": true, 00:19:06.892 "enable_zerocopy_send_client": false, 00:19:06.892 "zerocopy_threshold": 0, 00:19:06.892 "tls_version": 0, 00:19:06.892 "enable_ktls": false 00:19:06.892 } 00:19:06.892 } 00:19:06.892 ] 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "subsystem": "vmd", 00:19:06.892 "config": [] 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "subsystem": "accel", 00:19:06.892 "config": [ 00:19:06.892 { 00:19:06.892 "method": "accel_set_options", 00:19:06.892 "params": { 00:19:06.892 "small_cache_size": 128, 00:19:06.892 "large_cache_size": 16, 00:19:06.892 "task_count": 2048, 00:19:06.892 "sequence_count": 2048, 00:19:06.892 "buf_count": 2048 00:19:06.892 } 00:19:06.892 } 00:19:06.892 ] 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "subsystem": "bdev", 00:19:06.892 "config": [ 00:19:06.892 { 00:19:06.892 "method": "bdev_set_options", 00:19:06.892 "params": { 00:19:06.892 "bdev_io_pool_size": 65535, 00:19:06.892 "bdev_io_cache_size": 256, 00:19:06.892 "bdev_auto_examine": true, 00:19:06.892 "iobuf_small_cache_size": 128, 00:19:06.892 "iobuf_large_cache_size": 16 00:19:06.892 } 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "method": "bdev_raid_set_options", 00:19:06.892 "params": { 00:19:06.892 "process_window_size_kb": 1024, 00:19:06.892 "process_max_bandwidth_mb_sec": 0 00:19:06.892 } 00:19:06.892 }, 
00:19:06.892 { 00:19:06.892 "method": "bdev_iscsi_set_options", 00:19:06.892 "params": { 00:19:06.892 "timeout_sec": 30 00:19:06.892 } 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "method": "bdev_nvme_set_options", 00:19:06.892 "params": { 00:19:06.892 "action_on_timeout": "none", 00:19:06.892 "timeout_us": 0, 00:19:06.892 "timeout_admin_us": 0, 00:19:06.892 "keep_alive_timeout_ms": 10000, 00:19:06.892 "arbitration_burst": 0, 00:19:06.892 "low_priority_weight": 0, 00:19:06.892 "medium_priority_weight": 0, 00:19:06.892 "high_priority_weight": 0, 00:19:06.892 "nvme_adminq_poll_period_us": 10000, 00:19:06.892 "nvme_ioq_poll_period_us": 0, 00:19:06.892 "io_queue_requests": 512, 00:19:06.892 "delay_cmd_submit": true, 00:19:06.892 "transport_retry_count": 4, 00:19:06.892 "bdev_retry_count": 3, 00:19:06.892 "transport_ack_timeout": 0, 00:19:06.892 "ctrlr_loss_timeout_sec": 0, 00:19:06.892 "reconnect_delay_sec": 0, 00:19:06.892 "fast_io_fail_timeout_sec": 0, 00:19:06.892 "disable_auto_failback": false, 00:19:06.892 "generate_uuids": false, 00:19:06.892 "transport_tos": 0, 00:19:06.892 "nvme_error_stat": false, 00:19:06.892 "rdma_srq_size": 0, 00:19:06.892 "io_path_stat": false, 00:19:06.892 "allow_accel_sequence": false, 00:19:06.892 "rdma_max_cq_size": 0, 00:19:06.892 "rdma_cm_event_timeout_ms": 0, 00:19:06.892 "dhchap_digests": [ 00:19:06.892 "sha256", 00:19:06.892 "sha384", 00:19:06.892 "sha512" 00:19:06.892 ], 00:19:06.892 "dhchap_dhgroups": [ 00:19:06.892 "null", 00:19:06.892 "ffdhe2048", 00:19:06.892 "ffdhe3072", 00:19:06.892 "ffdhe4096", 00:19:06.892 "ffdhe6144", 00:19:06.892 "ffdhe8192" 00:19:06.892 ] 00:19:06.892 } 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "method": "bdev_nvme_attach_controller", 00:19:06.892 "params": { 00:19:06.892 "name": "TLSTEST", 00:19:06.892 "trtype": "TCP", 00:19:06.892 "adrfam": "IPv4", 00:19:06.892 "traddr": "10.0.0.2", 00:19:06.892 "trsvcid": "4420", 00:19:06.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.892 "prchk_reftag": false, 00:19:06.892 "prchk_guard": false, 00:19:06.892 "ctrlr_loss_timeout_sec": 0, 00:19:06.892 "reconnect_delay_sec": 0, 00:19:06.892 "fast_io_fail_timeout_sec": 0, 00:19:06.892 "psk": "key0", 00:19:06.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.892 "hdgst": false, 00:19:06.892 "ddgst": false, 00:19:06.892 "multipath": "multipath" 00:19:06.892 } 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "method": "bdev_nvme_set_hotplug", 00:19:06.892 "params": { 00:19:06.892 "period_us": 100000, 00:19:06.892 "enable": false 00:19:06.892 } 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "method": "bdev_wait_for_examine" 00:19:06.892 } 00:19:06.892 ] 00:19:06.892 }, 00:19:06.892 { 00:19:06.892 "subsystem": "nbd", 00:19:06.892 "config": [] 00:19:06.892 } 00:19:06.892 ] 00:19:06.892 }' 00:19:06.892 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.892 08:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.892 [2024-11-27 08:02:00.815624] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
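
A condensed sketch of what the test is doing with the -c /dev/fd/62 and -c /dev/fd/63 arguments above: the JSON captured with save_config from the live target and from bdevperf is handed back to freshly started applications over anonymous file descriptors. The process substitution and backgrounding below are an assumption about how the helper wires this up; the variable names, options and paths (shown relative to the SPDK checkout) come straight from the trace.

    # Capture the running configuration of the target and of bdevperf.
    tgtconf=$(scripts/rpc.py save_config)
    bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    # Feed the captured JSON back in over /dev/fd/NN via process substitution;
    # this is what the -c /dev/fd/62 and -c /dev/fd/63 arguments correspond to.
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
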
00:19:06.892 [2024-11-27 08:02:00.815670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470662 ] 00:19:06.892 [2024-11-27 08:02:00.873601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.892 [2024-11-27 08:02:00.916591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.150 [2024-11-27 08:02:01.070872] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.716 08:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.716 08:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:07.716 08:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:07.716 Running I/O for 10 seconds... 00:19:10.024 5538.00 IOPS, 21.63 MiB/s [2024-11-27T07:02:05.067Z] 5553.00 IOPS, 21.69 MiB/s [2024-11-27T07:02:06.000Z] 5501.33 IOPS, 21.49 MiB/s [2024-11-27T07:02:06.934Z] 5485.00 IOPS, 21.43 MiB/s [2024-11-27T07:02:07.868Z] 5477.20 IOPS, 21.40 MiB/s [2024-11-27T07:02:08.802Z] 5476.50 IOPS, 21.39 MiB/s [2024-11-27T07:02:10.175Z] 5482.00 IOPS, 21.41 MiB/s [2024-11-27T07:02:11.106Z] 5478.38 IOPS, 21.40 MiB/s [2024-11-27T07:02:12.041Z] 5477.11 IOPS, 21.39 MiB/s [2024-11-27T07:02:12.041Z] 5482.30 IOPS, 21.42 MiB/s 00:19:17.932 Latency(us) 00:19:17.932 [2024-11-27T07:02:12.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.932 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:17.932 Verification LBA range: start 0x0 length 0x2000 00:19:17.932 TLSTESTn1 : 10.02 5484.65 21.42 0.00 0.00 23298.68 7408.42 24732.72 00:19:17.932 [2024-11-27T07:02:12.041Z] =================================================================================================================== 00:19:17.932 [2024-11-27T07:02:12.041Z] Total : 5484.65 21.42 0.00 0.00 23298.68 7408.42 24732.72 00:19:17.932 { 00:19:17.932 "results": [ 00:19:17.932 { 00:19:17.932 "job": "TLSTESTn1", 00:19:17.932 "core_mask": "0x4", 00:19:17.932 "workload": "verify", 00:19:17.932 "status": "finished", 00:19:17.932 "verify_range": { 00:19:17.932 "start": 0, 00:19:17.932 "length": 8192 00:19:17.932 }, 00:19:17.932 "queue_depth": 128, 00:19:17.932 "io_size": 4096, 00:19:17.932 "runtime": 10.018863, 00:19:17.932 "iops": 5484.654296600323, 00:19:17.932 "mibps": 21.42443084609501, 00:19:17.932 "io_failed": 0, 00:19:17.932 "io_timeout": 0, 00:19:17.932 "avg_latency_us": 23298.681399723067, 00:19:17.932 "min_latency_us": 7408.417391304348, 00:19:17.932 "max_latency_us": 24732.71652173913 00:19:17.932 } 00:19:17.932 ], 00:19:17.932 "core_count": 1 00:19:17.932 } 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2470662 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2470662 ']' 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2470662 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470662 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470662' 00:19:17.932 killing process with pid 2470662 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2470662 00:19:17.932 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.932 00:19:17.932 Latency(us) 00:19:17.932 [2024-11-27T07:02:12.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.932 [2024-11-27T07:02:12.041Z] =================================================================================================================== 00:19:17.932 [2024-11-27T07:02:12.041Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.932 08:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2470662 00:19:17.932 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2470415 00:19:17.932 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2470415 ']' 00:19:17.932 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2470415 00:19:17.932 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:17.932 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.932 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470415 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470415' 00:19:18.191 killing process with pid 2470415 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2470415 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2470415 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2472506 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2472506 
00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2472506 ']' 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.191 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.191 [2024-11-27 08:02:12.290928] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:19:18.191 [2024-11-27 08:02:12.290979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.449 [2024-11-27 08:02:12.352325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.449 [2024-11-27 08:02:12.393413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.449 [2024-11-27 08:02:12.393449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.449 [2024-11-27 08:02:12.393458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.449 [2024-11-27 08:02:12.393464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.449 [2024-11-27 08:02:12.393469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
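
The nvmfappstart trace just above (nvmf/common.sh@508-512) condenses to roughly the following; the backgrounding and PID capture are implied by the helper rather than spelled out in the trace, so treat those two lines as an assumption.

    # Start nvmf_tgt inside the test's network namespace, remember its PID,
    # wait for its RPC socket, and arrange cleanup on exit.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!                     # assumed: the helper records the PID this way
    waitforlisten "$nvmfpid"       # helper from autotest_common.sh
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
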
00:19:18.449 [2024-11-27 08:02:12.394025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.449 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.449 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.449 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.449 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.449 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.449 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.449 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.XCh5ManDQo 00:19:18.449 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XCh5ManDQo 00:19:18.449 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:18.707 [2024-11-27 08:02:12.691301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.707 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:18.965 08:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:18.965 [2024-11-27 08:02:13.072299] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.965 [2024-11-27 08:02:13.072508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.223 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:19.223 malloc0 00:19:19.223 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:19.480 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XCh5ManDQo 00:19:19.739 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.997 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2472759 00:19:19.997 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:19.997 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.997 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2472759 /var/tmp/bdevperf.sock 00:19:19.997 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2472759 ']' 00:19:19.997 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.997 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.997 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.997 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.997 08:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.997 [2024-11-27 08:02:13.908666] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:19:19.997 [2024-11-27 08:02:13.908716] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472759 ] 00:19:19.997 [2024-11-27 08:02:13.970093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.997 [2024-11-27 08:02:14.012650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.255 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.255 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:20.255 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XCh5ManDQo 00:19:20.255 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:20.513 [2024-11-27 08:02:14.470424] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.513 nvme0n1 00:19:20.513 08:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:20.772 Running I/O for 1 seconds... 
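
The host side of the one-second run just kicked off comes down to three calls against the bdevperf application; this is a restatement of the commands traced at 08:02:14 above with the workspace prefix shortened, not an addition to the test.

    # Register the same PSK interchange file with bdevperf's keyring, attach to
    # the TLS listener, then drive the configured verify workload.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XCh5ManDQo
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
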
00:19:21.706 5291.00 IOPS, 20.67 MiB/s 00:19:21.706 Latency(us) 00:19:21.706 [2024-11-27T07:02:15.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.706 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:21.706 Verification LBA range: start 0x0 length 0x2000 00:19:21.706 nvme0n1 : 1.02 5333.30 20.83 0.00 0.00 23813.78 4957.94 21085.50 00:19:21.706 [2024-11-27T07:02:15.815Z] =================================================================================================================== 00:19:21.706 [2024-11-27T07:02:15.815Z] Total : 5333.30 20.83 0.00 0.00 23813.78 4957.94 21085.50 00:19:21.706 { 00:19:21.706 "results": [ 00:19:21.706 { 00:19:21.706 "job": "nvme0n1", 00:19:21.706 "core_mask": "0x2", 00:19:21.706 "workload": "verify", 00:19:21.706 "status": "finished", 00:19:21.706 "verify_range": { 00:19:21.706 "start": 0, 00:19:21.706 "length": 8192 00:19:21.706 }, 00:19:21.706 "queue_depth": 128, 00:19:21.706 "io_size": 4096, 00:19:21.706 "runtime": 1.016069, 00:19:21.706 "iops": 5333.2992149155225, 00:19:21.706 "mibps": 20.83320005826376, 00:19:21.706 "io_failed": 0, 00:19:21.706 "io_timeout": 0, 00:19:21.706 "avg_latency_us": 23813.78125339987, 00:19:21.706 "min_latency_us": 4957.940869565217, 00:19:21.706 "max_latency_us": 21085.49565217391 00:19:21.706 } 00:19:21.706 ], 00:19:21.706 "core_count": 1 00:19:21.706 } 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2472759 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2472759 ']' 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2472759 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472759 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472759' 00:19:21.706 killing process with pid 2472759 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2472759 00:19:21.706 Received shutdown signal, test time was about 1.000000 seconds 00:19:21.706 00:19:21.706 Latency(us) 00:19:21.706 [2024-11-27T07:02:15.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.706 [2024-11-27T07:02:15.815Z] =================================================================================================================== 00:19:21.706 [2024-11-27T07:02:15.815Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.706 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2472759 00:19:21.964 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2472506 00:19:21.965 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2472506 ']' 00:19:21.965 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2472506 00:19:21.965 08:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:21.965 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.965 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472506 00:19:21.965 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.965 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.965 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472506' 00:19:21.965 killing process with pid 2472506 00:19:21.965 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2472506 00:19:21.965 08:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2472506 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2473064 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2473064 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2473064 ']' 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.223 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.223 [2024-11-27 08:02:16.175148] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:19:22.223 [2024-11-27 08:02:16.175193] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.223 [2024-11-27 08:02:16.237767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.223 [2024-11-27 08:02:16.280075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.223 [2024-11-27 08:02:16.280112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:22.223 [2024-11-27 08:02:16.280119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.223 [2024-11-27 08:02:16.280126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.223 [2024-11-27 08:02:16.280131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.223 [2024-11-27 08:02:16.280696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.482 [2024-11-27 08:02:16.418577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.482 malloc0 00:19:22.482 [2024-11-27 08:02:16.446813] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:22.482 [2024-11-27 08:02:16.447047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2473250 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2473250 /var/tmp/bdevperf.sock 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2473250 ']' 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.482 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.482 [2024-11-27 08:02:16.509147] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:19:22.482 [2024-11-27 08:02:16.509189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473250 ] 00:19:22.482 [2024-11-27 08:02:16.570491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.740 [2024-11-27 08:02:16.612253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.740 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.740 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:22.740 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XCh5ManDQo 00:19:22.998 08:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:22.998 [2024-11-27 08:02:17.069111] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.256 nvme0n1 00:19:23.256 08:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:23.256 Running I/O for 1 seconds... 00:19:24.190 5178.00 IOPS, 20.23 MiB/s 00:19:24.190 Latency(us) 00:19:24.190 [2024-11-27T07:02:18.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.190 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:24.190 Verification LBA range: start 0x0 length 0x2000 00:19:24.190 nvme0n1 : 1.01 5227.99 20.42 0.00 0.00 24303.88 6268.66 33052.94 00:19:24.190 [2024-11-27T07:02:18.299Z] =================================================================================================================== 00:19:24.190 [2024-11-27T07:02:18.299Z] Total : 5227.99 20.42 0.00 0.00 24303.88 6268.66 33052.94 00:19:24.190 { 00:19:24.190 "results": [ 00:19:24.190 { 00:19:24.190 "job": "nvme0n1", 00:19:24.190 "core_mask": "0x2", 00:19:24.190 "workload": "verify", 00:19:24.190 "status": "finished", 00:19:24.190 "verify_range": { 00:19:24.190 "start": 0, 00:19:24.190 "length": 8192 00:19:24.190 }, 00:19:24.190 "queue_depth": 128, 00:19:24.190 "io_size": 4096, 00:19:24.190 "runtime": 1.014922, 00:19:24.190 "iops": 5227.9879636070555, 00:19:24.190 "mibps": 20.42182798284006, 00:19:24.190 "io_failed": 0, 00:19:24.190 "io_timeout": 0, 00:19:24.190 "avg_latency_us": 24303.88205640866, 00:19:24.190 "min_latency_us": 6268.660869565218, 00:19:24.190 "max_latency_us": 33052.93913043478 00:19:24.190 } 00:19:24.190 ], 00:19:24.190 "core_count": 1 00:19:24.190 } 00:19:24.190 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:24.190 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.190 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.447 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.447 08:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:24.447 "subsystems": [ 00:19:24.447 { 00:19:24.447 "subsystem": "keyring", 00:19:24.447 "config": [ 00:19:24.447 { 00:19:24.447 "method": "keyring_file_add_key", 00:19:24.447 "params": { 00:19:24.447 "name": "key0", 00:19:24.447 "path": "/tmp/tmp.XCh5ManDQo" 00:19:24.447 } 00:19:24.447 } 00:19:24.447 ] 00:19:24.447 }, 00:19:24.447 { 00:19:24.447 "subsystem": "iobuf", 00:19:24.447 "config": [ 00:19:24.447 { 00:19:24.447 "method": "iobuf_set_options", 00:19:24.447 "params": { 00:19:24.447 "small_pool_count": 8192, 00:19:24.447 "large_pool_count": 1024, 00:19:24.447 "small_bufsize": 8192, 00:19:24.448 "large_bufsize": 135168, 00:19:24.448 "enable_numa": false 00:19:24.448 } 00:19:24.448 } 00:19:24.448 ] 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "subsystem": "sock", 00:19:24.448 "config": [ 00:19:24.448 { 00:19:24.448 "method": "sock_set_default_impl", 00:19:24.448 "params": { 00:19:24.448 "impl_name": "posix" 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "sock_impl_set_options", 00:19:24.448 "params": { 00:19:24.448 "impl_name": "ssl", 00:19:24.448 "recv_buf_size": 4096, 00:19:24.448 "send_buf_size": 4096, 00:19:24.448 "enable_recv_pipe": true, 00:19:24.448 "enable_quickack": false, 00:19:24.448 "enable_placement_id": 0, 00:19:24.448 "enable_zerocopy_send_server": true, 00:19:24.448 "enable_zerocopy_send_client": false, 00:19:24.448 "zerocopy_threshold": 0, 00:19:24.448 "tls_version": 0, 00:19:24.448 "enable_ktls": false 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "sock_impl_set_options", 00:19:24.448 "params": { 00:19:24.448 "impl_name": "posix", 00:19:24.448 "recv_buf_size": 2097152, 00:19:24.448 "send_buf_size": 2097152, 00:19:24.448 "enable_recv_pipe": true, 00:19:24.448 "enable_quickack": false, 00:19:24.448 "enable_placement_id": 0, 00:19:24.448 "enable_zerocopy_send_server": true, 00:19:24.448 "enable_zerocopy_send_client": false, 00:19:24.448 "zerocopy_threshold": 0, 00:19:24.448 "tls_version": 0, 00:19:24.448 "enable_ktls": false 00:19:24.448 } 00:19:24.448 } 00:19:24.448 ] 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "subsystem": "vmd", 00:19:24.448 "config": [] 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "subsystem": "accel", 00:19:24.448 "config": [ 00:19:24.448 { 00:19:24.448 "method": "accel_set_options", 00:19:24.448 "params": { 00:19:24.448 "small_cache_size": 128, 00:19:24.448 "large_cache_size": 16, 00:19:24.448 "task_count": 2048, 00:19:24.448 "sequence_count": 2048, 00:19:24.448 "buf_count": 2048 00:19:24.448 } 00:19:24.448 } 00:19:24.448 ] 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "subsystem": "bdev", 00:19:24.448 "config": [ 00:19:24.448 { 00:19:24.448 "method": "bdev_set_options", 00:19:24.448 "params": { 00:19:24.448 "bdev_io_pool_size": 65535, 00:19:24.448 "bdev_io_cache_size": 256, 00:19:24.448 "bdev_auto_examine": true, 00:19:24.448 "iobuf_small_cache_size": 128, 00:19:24.448 "iobuf_large_cache_size": 16 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "bdev_raid_set_options", 00:19:24.448 "params": { 00:19:24.448 "process_window_size_kb": 1024, 00:19:24.448 "process_max_bandwidth_mb_sec": 0 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "bdev_iscsi_set_options", 00:19:24.448 "params": { 00:19:24.448 "timeout_sec": 30 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "bdev_nvme_set_options", 00:19:24.448 "params": { 00:19:24.448 "action_on_timeout": "none", 00:19:24.448 
"timeout_us": 0, 00:19:24.448 "timeout_admin_us": 0, 00:19:24.448 "keep_alive_timeout_ms": 10000, 00:19:24.448 "arbitration_burst": 0, 00:19:24.448 "low_priority_weight": 0, 00:19:24.448 "medium_priority_weight": 0, 00:19:24.448 "high_priority_weight": 0, 00:19:24.448 "nvme_adminq_poll_period_us": 10000, 00:19:24.448 "nvme_ioq_poll_period_us": 0, 00:19:24.448 "io_queue_requests": 0, 00:19:24.448 "delay_cmd_submit": true, 00:19:24.448 "transport_retry_count": 4, 00:19:24.448 "bdev_retry_count": 3, 00:19:24.448 "transport_ack_timeout": 0, 00:19:24.448 "ctrlr_loss_timeout_sec": 0, 00:19:24.448 "reconnect_delay_sec": 0, 00:19:24.448 "fast_io_fail_timeout_sec": 0, 00:19:24.448 "disable_auto_failback": false, 00:19:24.448 "generate_uuids": false, 00:19:24.448 "transport_tos": 0, 00:19:24.448 "nvme_error_stat": false, 00:19:24.448 "rdma_srq_size": 0, 00:19:24.448 "io_path_stat": false, 00:19:24.448 "allow_accel_sequence": false, 00:19:24.448 "rdma_max_cq_size": 0, 00:19:24.448 "rdma_cm_event_timeout_ms": 0, 00:19:24.448 "dhchap_digests": [ 00:19:24.448 "sha256", 00:19:24.448 "sha384", 00:19:24.448 "sha512" 00:19:24.448 ], 00:19:24.448 "dhchap_dhgroups": [ 00:19:24.448 "null", 00:19:24.448 "ffdhe2048", 00:19:24.448 "ffdhe3072", 00:19:24.448 "ffdhe4096", 00:19:24.448 "ffdhe6144", 00:19:24.448 "ffdhe8192" 00:19:24.448 ] 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "bdev_nvme_set_hotplug", 00:19:24.448 "params": { 00:19:24.448 "period_us": 100000, 00:19:24.448 "enable": false 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "bdev_malloc_create", 00:19:24.448 "params": { 00:19:24.448 "name": "malloc0", 00:19:24.448 "num_blocks": 8192, 00:19:24.448 "block_size": 4096, 00:19:24.448 "physical_block_size": 4096, 00:19:24.448 "uuid": "59b497f0-3292-45a7-b4c5-5ccab1218f33", 00:19:24.448 "optimal_io_boundary": 0, 00:19:24.448 "md_size": 0, 00:19:24.448 "dif_type": 0, 00:19:24.448 "dif_is_head_of_md": false, 00:19:24.448 "dif_pi_format": 0 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "bdev_wait_for_examine" 00:19:24.448 } 00:19:24.448 ] 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "subsystem": "nbd", 00:19:24.448 "config": [] 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "subsystem": "scheduler", 00:19:24.448 "config": [ 00:19:24.448 { 00:19:24.448 "method": "framework_set_scheduler", 00:19:24.448 "params": { 00:19:24.448 "name": "static" 00:19:24.448 } 00:19:24.448 } 00:19:24.448 ] 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "subsystem": "nvmf", 00:19:24.448 "config": [ 00:19:24.448 { 00:19:24.448 "method": "nvmf_set_config", 00:19:24.448 "params": { 00:19:24.448 "discovery_filter": "match_any", 00:19:24.448 "admin_cmd_passthru": { 00:19:24.448 "identify_ctrlr": false 00:19:24.448 }, 00:19:24.448 "dhchap_digests": [ 00:19:24.448 "sha256", 00:19:24.448 "sha384", 00:19:24.448 "sha512" 00:19:24.448 ], 00:19:24.448 "dhchap_dhgroups": [ 00:19:24.448 "null", 00:19:24.448 "ffdhe2048", 00:19:24.448 "ffdhe3072", 00:19:24.448 "ffdhe4096", 00:19:24.448 "ffdhe6144", 00:19:24.448 "ffdhe8192" 00:19:24.448 ] 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "nvmf_set_max_subsystems", 00:19:24.448 "params": { 00:19:24.448 "max_subsystems": 1024 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "nvmf_set_crdt", 00:19:24.448 "params": { 00:19:24.448 "crdt1": 0, 00:19:24.448 "crdt2": 0, 00:19:24.448 "crdt3": 0 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "nvmf_create_transport", 00:19:24.448 "params": 
{ 00:19:24.448 "trtype": "TCP", 00:19:24.448 "max_queue_depth": 128, 00:19:24.448 "max_io_qpairs_per_ctrlr": 127, 00:19:24.448 "in_capsule_data_size": 4096, 00:19:24.448 "max_io_size": 131072, 00:19:24.448 "io_unit_size": 131072, 00:19:24.448 "max_aq_depth": 128, 00:19:24.448 "num_shared_buffers": 511, 00:19:24.448 "buf_cache_size": 4294967295, 00:19:24.448 "dif_insert_or_strip": false, 00:19:24.448 "zcopy": false, 00:19:24.448 "c2h_success": false, 00:19:24.448 "sock_priority": 0, 00:19:24.448 "abort_timeout_sec": 1, 00:19:24.448 "ack_timeout": 0, 00:19:24.448 "data_wr_pool_size": 0 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "nvmf_create_subsystem", 00:19:24.448 "params": { 00:19:24.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.448 "allow_any_host": false, 00:19:24.448 "serial_number": "00000000000000000000", 00:19:24.448 "model_number": "SPDK bdev Controller", 00:19:24.448 "max_namespaces": 32, 00:19:24.448 "min_cntlid": 1, 00:19:24.448 "max_cntlid": 65519, 00:19:24.448 "ana_reporting": false 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "nvmf_subsystem_add_host", 00:19:24.448 "params": { 00:19:24.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.448 "host": "nqn.2016-06.io.spdk:host1", 00:19:24.448 "psk": "key0" 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "nvmf_subsystem_add_ns", 00:19:24.448 "params": { 00:19:24.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.448 "namespace": { 00:19:24.448 "nsid": 1, 00:19:24.448 "bdev_name": "malloc0", 00:19:24.448 "nguid": "59B497F0329245A7B4C55CCAB1218F33", 00:19:24.448 "uuid": "59b497f0-3292-45a7-b4c5-5ccab1218f33", 00:19:24.448 "no_auto_visible": false 00:19:24.448 } 00:19:24.448 } 00:19:24.448 }, 00:19:24.448 { 00:19:24.448 "method": "nvmf_subsystem_add_listener", 00:19:24.448 "params": { 00:19:24.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.448 "listen_address": { 00:19:24.448 "trtype": "TCP", 00:19:24.448 "adrfam": "IPv4", 00:19:24.448 "traddr": "10.0.0.2", 00:19:24.448 "trsvcid": "4420" 00:19:24.448 }, 00:19:24.448 "secure_channel": false, 00:19:24.448 "sock_impl": "ssl" 00:19:24.448 } 00:19:24.448 } 00:19:24.448 ] 00:19:24.448 } 00:19:24.448 ] 00:19:24.448 }' 00:19:24.448 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:24.706 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:24.706 "subsystems": [ 00:19:24.706 { 00:19:24.706 "subsystem": "keyring", 00:19:24.706 "config": [ 00:19:24.706 { 00:19:24.706 "method": "keyring_file_add_key", 00:19:24.706 "params": { 00:19:24.706 "name": "key0", 00:19:24.706 "path": "/tmp/tmp.XCh5ManDQo" 00:19:24.706 } 00:19:24.706 } 00:19:24.706 ] 00:19:24.706 }, 00:19:24.706 { 00:19:24.706 "subsystem": "iobuf", 00:19:24.706 "config": [ 00:19:24.706 { 00:19:24.706 "method": "iobuf_set_options", 00:19:24.706 "params": { 00:19:24.706 "small_pool_count": 8192, 00:19:24.706 "large_pool_count": 1024, 00:19:24.706 "small_bufsize": 8192, 00:19:24.706 "large_bufsize": 135168, 00:19:24.706 "enable_numa": false 00:19:24.706 } 00:19:24.706 } 00:19:24.706 ] 00:19:24.706 }, 00:19:24.706 { 00:19:24.706 "subsystem": "sock", 00:19:24.706 "config": [ 00:19:24.706 { 00:19:24.706 "method": "sock_set_default_impl", 00:19:24.706 "params": { 00:19:24.706 "impl_name": "posix" 00:19:24.706 } 00:19:24.706 }, 00:19:24.706 { 00:19:24.706 "method": "sock_impl_set_options", 00:19:24.706 
"params": { 00:19:24.706 "impl_name": "ssl", 00:19:24.706 "recv_buf_size": 4096, 00:19:24.706 "send_buf_size": 4096, 00:19:24.706 "enable_recv_pipe": true, 00:19:24.706 "enable_quickack": false, 00:19:24.706 "enable_placement_id": 0, 00:19:24.706 "enable_zerocopy_send_server": true, 00:19:24.706 "enable_zerocopy_send_client": false, 00:19:24.706 "zerocopy_threshold": 0, 00:19:24.706 "tls_version": 0, 00:19:24.706 "enable_ktls": false 00:19:24.706 } 00:19:24.706 }, 00:19:24.706 { 00:19:24.706 "method": "sock_impl_set_options", 00:19:24.706 "params": { 00:19:24.706 "impl_name": "posix", 00:19:24.706 "recv_buf_size": 2097152, 00:19:24.706 "send_buf_size": 2097152, 00:19:24.706 "enable_recv_pipe": true, 00:19:24.706 "enable_quickack": false, 00:19:24.706 "enable_placement_id": 0, 00:19:24.706 "enable_zerocopy_send_server": true, 00:19:24.706 "enable_zerocopy_send_client": false, 00:19:24.706 "zerocopy_threshold": 0, 00:19:24.706 "tls_version": 0, 00:19:24.706 "enable_ktls": false 00:19:24.706 } 00:19:24.706 } 00:19:24.706 ] 00:19:24.706 }, 00:19:24.706 { 00:19:24.706 "subsystem": "vmd", 00:19:24.706 "config": [] 00:19:24.706 }, 00:19:24.706 { 00:19:24.706 "subsystem": "accel", 00:19:24.706 "config": [ 00:19:24.706 { 00:19:24.706 "method": "accel_set_options", 00:19:24.706 "params": { 00:19:24.706 "small_cache_size": 128, 00:19:24.706 "large_cache_size": 16, 00:19:24.706 "task_count": 2048, 00:19:24.707 "sequence_count": 2048, 00:19:24.707 "buf_count": 2048 00:19:24.707 } 00:19:24.707 } 00:19:24.707 ] 00:19:24.707 }, 00:19:24.707 { 00:19:24.707 "subsystem": "bdev", 00:19:24.707 "config": [ 00:19:24.707 { 00:19:24.707 "method": "bdev_set_options", 00:19:24.707 "params": { 00:19:24.707 "bdev_io_pool_size": 65535, 00:19:24.707 "bdev_io_cache_size": 256, 00:19:24.707 "bdev_auto_examine": true, 00:19:24.707 "iobuf_small_cache_size": 128, 00:19:24.707 "iobuf_large_cache_size": 16 00:19:24.707 } 00:19:24.707 }, 00:19:24.707 { 00:19:24.707 "method": "bdev_raid_set_options", 00:19:24.707 "params": { 00:19:24.707 "process_window_size_kb": 1024, 00:19:24.707 "process_max_bandwidth_mb_sec": 0 00:19:24.707 } 00:19:24.707 }, 00:19:24.707 { 00:19:24.707 "method": "bdev_iscsi_set_options", 00:19:24.707 "params": { 00:19:24.707 "timeout_sec": 30 00:19:24.707 } 00:19:24.707 }, 00:19:24.707 { 00:19:24.707 "method": "bdev_nvme_set_options", 00:19:24.707 "params": { 00:19:24.707 "action_on_timeout": "none", 00:19:24.707 "timeout_us": 0, 00:19:24.707 "timeout_admin_us": 0, 00:19:24.707 "keep_alive_timeout_ms": 10000, 00:19:24.707 "arbitration_burst": 0, 00:19:24.707 "low_priority_weight": 0, 00:19:24.707 "medium_priority_weight": 0, 00:19:24.707 "high_priority_weight": 0, 00:19:24.707 "nvme_adminq_poll_period_us": 10000, 00:19:24.707 "nvme_ioq_poll_period_us": 0, 00:19:24.707 "io_queue_requests": 512, 00:19:24.707 "delay_cmd_submit": true, 00:19:24.707 "transport_retry_count": 4, 00:19:24.707 "bdev_retry_count": 3, 00:19:24.707 "transport_ack_timeout": 0, 00:19:24.707 "ctrlr_loss_timeout_sec": 0, 00:19:24.707 "reconnect_delay_sec": 0, 00:19:24.707 "fast_io_fail_timeout_sec": 0, 00:19:24.707 "disable_auto_failback": false, 00:19:24.707 "generate_uuids": false, 00:19:24.707 "transport_tos": 0, 00:19:24.707 "nvme_error_stat": false, 00:19:24.707 "rdma_srq_size": 0, 00:19:24.707 "io_path_stat": false, 00:19:24.707 "allow_accel_sequence": false, 00:19:24.707 "rdma_max_cq_size": 0, 00:19:24.707 "rdma_cm_event_timeout_ms": 0, 00:19:24.707 "dhchap_digests": [ 00:19:24.707 "sha256", 00:19:24.707 "sha384", 00:19:24.707 
"sha512" 00:19:24.707 ], 00:19:24.707 "dhchap_dhgroups": [ 00:19:24.707 "null", 00:19:24.707 "ffdhe2048", 00:19:24.707 "ffdhe3072", 00:19:24.707 "ffdhe4096", 00:19:24.707 "ffdhe6144", 00:19:24.707 "ffdhe8192" 00:19:24.707 ] 00:19:24.707 } 00:19:24.707 }, 00:19:24.707 { 00:19:24.707 "method": "bdev_nvme_attach_controller", 00:19:24.707 "params": { 00:19:24.707 "name": "nvme0", 00:19:24.707 "trtype": "TCP", 00:19:24.707 "adrfam": "IPv4", 00:19:24.707 "traddr": "10.0.0.2", 00:19:24.707 "trsvcid": "4420", 00:19:24.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.707 "prchk_reftag": false, 00:19:24.707 "prchk_guard": false, 00:19:24.707 "ctrlr_loss_timeout_sec": 0, 00:19:24.707 "reconnect_delay_sec": 0, 00:19:24.707 "fast_io_fail_timeout_sec": 0, 00:19:24.707 "psk": "key0", 00:19:24.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.707 "hdgst": false, 00:19:24.707 "ddgst": false, 00:19:24.707 "multipath": "multipath" 00:19:24.707 } 00:19:24.707 }, 00:19:24.707 { 00:19:24.707 "method": "bdev_nvme_set_hotplug", 00:19:24.707 "params": { 00:19:24.707 "period_us": 100000, 00:19:24.707 "enable": false 00:19:24.707 } 00:19:24.707 }, 00:19:24.707 { 00:19:24.707 "method": "bdev_enable_histogram", 00:19:24.707 "params": { 00:19:24.707 "name": "nvme0n1", 00:19:24.707 "enable": true 00:19:24.707 } 00:19:24.707 }, 00:19:24.707 { 00:19:24.707 "method": "bdev_wait_for_examine" 00:19:24.707 } 00:19:24.707 ] 00:19:24.707 }, 00:19:24.707 { 00:19:24.707 "subsystem": "nbd", 00:19:24.707 "config": [] 00:19:24.707 } 00:19:24.707 ] 00:19:24.707 }' 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2473250 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2473250 ']' 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2473250 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473250 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473250' 00:19:24.707 killing process with pid 2473250 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2473250 00:19:24.707 Received shutdown signal, test time was about 1.000000 seconds 00:19:24.707 00:19:24.707 Latency(us) 00:19:24.707 [2024-11-27T07:02:18.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.707 [2024-11-27T07:02:18.816Z] =================================================================================================================== 00:19:24.707 [2024-11-27T07:02:18.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.707 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2473250 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2473064 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2473064 
']' 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2473064 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473064 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473064' 00:19:24.965 killing process with pid 2473064 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2473064 00:19:24.965 08:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2473064 00:19:24.965 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:24.965 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.965 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.965 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:24.965 "subsystems": [ 00:19:24.965 { 00:19:24.965 "subsystem": "keyring", 00:19:24.965 "config": [ 00:19:24.965 { 00:19:24.965 "method": "keyring_file_add_key", 00:19:24.965 "params": { 00:19:24.965 "name": "key0", 00:19:24.965 "path": "/tmp/tmp.XCh5ManDQo" 00:19:24.965 } 00:19:24.965 } 00:19:24.965 ] 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "subsystem": "iobuf", 00:19:24.965 "config": [ 00:19:24.965 { 00:19:24.965 "method": "iobuf_set_options", 00:19:24.965 "params": { 00:19:24.965 "small_pool_count": 8192, 00:19:24.965 "large_pool_count": 1024, 00:19:24.965 "small_bufsize": 8192, 00:19:24.965 "large_bufsize": 135168, 00:19:24.965 "enable_numa": false 00:19:24.965 } 00:19:24.965 } 00:19:24.965 ] 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "subsystem": "sock", 00:19:24.965 "config": [ 00:19:24.965 { 00:19:24.965 "method": "sock_set_default_impl", 00:19:24.965 "params": { 00:19:24.965 "impl_name": "posix" 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "sock_impl_set_options", 00:19:24.965 "params": { 00:19:24.965 "impl_name": "ssl", 00:19:24.965 "recv_buf_size": 4096, 00:19:24.965 "send_buf_size": 4096, 00:19:24.965 "enable_recv_pipe": true, 00:19:24.965 "enable_quickack": false, 00:19:24.965 "enable_placement_id": 0, 00:19:24.965 "enable_zerocopy_send_server": true, 00:19:24.965 "enable_zerocopy_send_client": false, 00:19:24.965 "zerocopy_threshold": 0, 00:19:24.965 "tls_version": 0, 00:19:24.965 "enable_ktls": false 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "sock_impl_set_options", 00:19:24.965 "params": { 00:19:24.965 "impl_name": "posix", 00:19:24.965 "recv_buf_size": 2097152, 00:19:24.965 "send_buf_size": 2097152, 00:19:24.965 "enable_recv_pipe": true, 00:19:24.965 "enable_quickack": false, 00:19:24.965 "enable_placement_id": 0, 00:19:24.965 "enable_zerocopy_send_server": true, 00:19:24.965 "enable_zerocopy_send_client": false, 00:19:24.965 "zerocopy_threshold": 0, 00:19:24.965 "tls_version": 0, 00:19:24.965 "enable_ktls": 
false 00:19:24.965 } 00:19:24.965 } 00:19:24.965 ] 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "subsystem": "vmd", 00:19:24.965 "config": [] 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "subsystem": "accel", 00:19:24.965 "config": [ 00:19:24.965 { 00:19:24.965 "method": "accel_set_options", 00:19:24.965 "params": { 00:19:24.965 "small_cache_size": 128, 00:19:24.965 "large_cache_size": 16, 00:19:24.965 "task_count": 2048, 00:19:24.965 "sequence_count": 2048, 00:19:24.965 "buf_count": 2048 00:19:24.965 } 00:19:24.965 } 00:19:24.965 ] 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "subsystem": "bdev", 00:19:24.965 "config": [ 00:19:24.965 { 00:19:24.965 "method": "bdev_set_options", 00:19:24.965 "params": { 00:19:24.965 "bdev_io_pool_size": 65535, 00:19:24.965 "bdev_io_cache_size": 256, 00:19:24.965 "bdev_auto_examine": true, 00:19:24.965 "iobuf_small_cache_size": 128, 00:19:24.965 "iobuf_large_cache_size": 16 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "bdev_raid_set_options", 00:19:24.965 "params": { 00:19:24.965 "process_window_size_kb": 1024, 00:19:24.965 "process_max_bandwidth_mb_sec": 0 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "bdev_iscsi_set_options", 00:19:24.965 "params": { 00:19:24.965 "timeout_sec": 30 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "bdev_nvme_set_options", 00:19:24.965 "params": { 00:19:24.965 "action_on_timeout": "none", 00:19:24.965 "timeout_us": 0, 00:19:24.965 "timeout_admin_us": 0, 00:19:24.965 "keep_alive_timeout_ms": 10000, 00:19:24.965 "arbitration_burst": 0, 00:19:24.965 "low_priority_weight": 0, 00:19:24.965 "medium_priority_weight": 0, 00:19:24.965 "high_priority_weight": 0, 00:19:24.965 "nvme_adminq_poll_period_us": 10000, 00:19:24.965 "nvme_ioq_poll_period_us": 0, 00:19:24.965 "io_queue_requests": 0, 00:19:24.965 "delay_cmd_submit": true, 00:19:24.965 "transport_retry_count": 4, 00:19:24.965 "bdev_retry_count": 3, 00:19:24.965 "transport_ack_timeout": 0, 00:19:24.965 "ctrlr_loss_timeout_sec": 0, 00:19:24.965 "reconnect_delay_sec": 0, 00:19:24.965 "fast_io_fail_timeout_sec": 0, 00:19:24.965 "disable_auto_failback": false, 00:19:24.965 "generate_uuids": false, 00:19:24.965 "transport_tos": 0, 00:19:24.965 "nvme_error_stat": false, 00:19:24.965 "rdma_srq_size": 0, 00:19:24.965 "io_path_stat": false, 00:19:24.965 "allow_accel_sequence": false, 00:19:24.965 "rdma_max_cq_size": 0, 00:19:24.965 "rdma_cm_event_timeout_ms": 0, 00:19:24.965 "dhchap_digests": [ 00:19:24.965 "sha256", 00:19:24.965 "sha384", 00:19:24.965 "sha512" 00:19:24.965 ], 00:19:24.965 "dhchap_dhgroups": [ 00:19:24.965 "null", 00:19:24.965 "ffdhe2048", 00:19:24.965 "ffdhe3072", 00:19:24.965 "ffdhe4096", 00:19:24.965 "ffdhe6144", 00:19:24.965 "ffdhe8192" 00:19:24.965 ] 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "bdev_nvme_set_hotplug", 00:19:24.965 "params": { 00:19:24.965 "period_us": 100000, 00:19:24.965 "enable": false 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "bdev_malloc_create", 00:19:24.965 "params": { 00:19:24.965 "name": "malloc0", 00:19:24.965 "num_blocks": 8192, 00:19:24.965 "block_size": 4096, 00:19:24.965 "physical_block_size": 4096, 00:19:24.965 "uuid": "59b497f0-3292-45a7-b4c5-5ccab1218f33", 00:19:24.965 "optimal_io_boundary": 0, 00:19:24.965 "md_size": 0, 00:19:24.965 "dif_type": 0, 00:19:24.965 "dif_is_head_of_md": false, 00:19:24.965 "dif_pi_format": 0 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "bdev_wait_for_examine" 
00:19:24.965 } 00:19:24.965 ] 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "subsystem": "nbd", 00:19:24.965 "config": [] 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "subsystem": "scheduler", 00:19:24.965 "config": [ 00:19:24.965 { 00:19:24.965 "method": "framework_set_scheduler", 00:19:24.965 "params": { 00:19:24.965 "name": "static" 00:19:24.965 } 00:19:24.965 } 00:19:24.965 ] 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "subsystem": "nvmf", 00:19:24.965 "config": [ 00:19:24.965 { 00:19:24.965 "method": "nvmf_set_config", 00:19:24.965 "params": { 00:19:24.965 "discovery_filter": "match_any", 00:19:24.965 "admin_cmd_passthru": { 00:19:24.965 "identify_ctrlr": false 00:19:24.965 }, 00:19:24.965 "dhchap_digests": [ 00:19:24.965 "sha256", 00:19:24.965 "sha384", 00:19:24.965 "sha512" 00:19:24.965 ], 00:19:24.965 "dhchap_dhgroups": [ 00:19:24.965 "null", 00:19:24.965 "ffdhe2048", 00:19:24.965 "ffdhe3072", 00:19:24.965 "ffdhe4096", 00:19:24.965 "ffdhe6144", 00:19:24.965 "ffdhe8192" 00:19:24.965 ] 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "nvmf_set_max_subsystems", 00:19:24.965 "params": { 00:19:24.965 "max_subsystems": 1024 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "nvmf_set_crdt", 00:19:24.965 "params": { 00:19:24.965 "crdt1": 0, 00:19:24.965 "crdt2": 0, 00:19:24.965 "crdt3": 0 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "nvmf_create_transport", 00:19:24.965 "params": { 00:19:24.965 "trtype": "TCP", 00:19:24.965 "max_queue_depth": 128, 00:19:24.965 "max_io_qpairs_per_ctrlr": 127, 00:19:24.965 "in_capsule_data_size": 4096, 00:19:24.965 "max_io_size": 131072, 00:19:24.965 "io_unit_size": 131072, 00:19:24.965 "max_aq_depth": 128, 00:19:24.965 "num_shared_buffers": 511, 00:19:24.965 "buf_cache_size": 4294967295, 00:19:24.965 "dif_insert_or_strip": false, 00:19:24.965 "zcopy": false, 00:19:24.965 "c2h_success": false, 00:19:24.965 "sock_priority": 0, 00:19:24.965 "abort_timeout_sec": 1, 00:19:24.965 "ack_timeout": 0, 00:19:24.965 "data_wr_pool_size": 0 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "nvmf_create_subsystem", 00:19:24.965 "params": { 00:19:24.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.965 "allow_any_host": false, 00:19:24.965 "serial_number": "00000000000000000000", 00:19:24.965 "model_number": "SPDK bdev Controller", 00:19:24.965 "max_namespaces": 32, 00:19:24.965 "min_cntlid": 1, 00:19:24.965 "max_cntlid": 65519, 00:19:24.965 "ana_reporting": false 00:19:24.965 } 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "method": "nvmf_subsystem_add_host", 00:19:24.965 "params": { 00:19:24.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.966 "host": "nqn.2016-06.io.spdk:host1", 00:19:24.966 "psk": "key0" 00:19:24.966 } 00:19:24.966 }, 00:19:24.966 { 00:19:24.966 "method": "nvmf_subsystem_add_ns", 00:19:24.966 "params": { 00:19:24.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.966 "namespace": { 00:19:24.966 "nsid": 1, 00:19:24.966 "bdev_name": "malloc0", 00:19:24.966 "nguid": "59B497F0329245A7B4C55CCAB1218F33", 00:19:24.966 "uuid": "59b497f0-3292-45a7-b4c5-5ccab1218f33", 00:19:24.966 "no_auto_visible": false 00:19:24.966 } 00:19:24.966 } 00:19:24.966 }, 00:19:24.966 { 00:19:24.966 "method": "nvmf_subsystem_add_listener", 00:19:24.966 "params": { 00:19:24.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.966 "listen_address": { 00:19:24.966 "trtype": "TCP", 00:19:24.966 "adrfam": "IPv4", 00:19:24.966 "traddr": "10.0.0.2", 00:19:24.966 "trsvcid": "4420" 00:19:24.966 }, 00:19:24.966 
"secure_channel": false, 00:19:24.966 "sock_impl": "ssl" 00:19:24.966 } 00:19:24.966 } 00:19:24.966 ] 00:19:24.966 } 00:19:24.966 ] 00:19:24.966 }' 00:19:24.966 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.966 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2473612 00:19:24.966 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2473612 00:19:24.966 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:24.966 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2473612 ']' 00:19:24.966 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.966 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.966 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.966 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.966 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.223 [2024-11-27 08:02:19.109603] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:19:25.224 [2024-11-27 08:02:19.109652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.224 [2024-11-27 08:02:19.175422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.224 [2024-11-27 08:02:19.216350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.224 [2024-11-27 08:02:19.216389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.224 [2024-11-27 08:02:19.216396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.224 [2024-11-27 08:02:19.216402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.224 [2024-11-27 08:02:19.216407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:25.224 [2024-11-27 08:02:19.217043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.519 [2024-11-27 08:02:19.432290] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.519 [2024-11-27 08:02:19.464321] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:25.519 [2024-11-27 08:02:19.464559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.082 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.082 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.082 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.082 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.082 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.082 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.083 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2473753 00:19:26.083 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2473753 /var/tmp/bdevperf.sock 00:19:26.083 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2473753 ']' 00:19:26.083 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.083 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:26.083 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.083 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:26.083 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:26.083 "subsystems": [ 00:19:26.083 { 00:19:26.083 "subsystem": "keyring", 00:19:26.083 "config": [ 00:19:26.083 { 00:19:26.083 "method": "keyring_file_add_key", 00:19:26.083 "params": { 00:19:26.083 "name": "key0", 00:19:26.083 "path": "/tmp/tmp.XCh5ManDQo" 00:19:26.083 } 00:19:26.083 } 00:19:26.083 ] 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "subsystem": "iobuf", 00:19:26.083 "config": [ 00:19:26.083 { 00:19:26.083 "method": "iobuf_set_options", 00:19:26.083 "params": { 00:19:26.083 "small_pool_count": 8192, 00:19:26.083 "large_pool_count": 1024, 00:19:26.083 "small_bufsize": 8192, 00:19:26.083 "large_bufsize": 135168, 00:19:26.083 "enable_numa": false 00:19:26.083 } 00:19:26.083 } 00:19:26.083 ] 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "subsystem": "sock", 00:19:26.083 "config": [ 00:19:26.083 { 00:19:26.083 "method": "sock_set_default_impl", 00:19:26.083 "params": { 00:19:26.083 "impl_name": "posix" 00:19:26.083 } 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "method": "sock_impl_set_options", 00:19:26.083 "params": { 00:19:26.083 "impl_name": "ssl", 00:19:26.083 "recv_buf_size": 4096, 00:19:26.083 "send_buf_size": 4096, 00:19:26.083 "enable_recv_pipe": true, 00:19:26.083 "enable_quickack": false, 00:19:26.083 "enable_placement_id": 0, 00:19:26.083 "enable_zerocopy_send_server": true, 00:19:26.083 "enable_zerocopy_send_client": false, 00:19:26.083 "zerocopy_threshold": 0, 00:19:26.083 "tls_version": 0, 00:19:26.083 "enable_ktls": false 00:19:26.083 } 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "method": "sock_impl_set_options", 00:19:26.083 "params": { 00:19:26.083 "impl_name": "posix", 00:19:26.083 "recv_buf_size": 2097152, 00:19:26.083 "send_buf_size": 2097152, 00:19:26.083 "enable_recv_pipe": true, 00:19:26.083 "enable_quickack": false, 00:19:26.083 "enable_placement_id": 0, 00:19:26.083 "enable_zerocopy_send_server": true, 00:19:26.083 "enable_zerocopy_send_client": false, 00:19:26.083 "zerocopy_threshold": 0, 00:19:26.083 "tls_version": 0, 00:19:26.083 "enable_ktls": false 00:19:26.083 } 00:19:26.083 } 00:19:26.083 ] 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "subsystem": "vmd", 00:19:26.083 "config": [] 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "subsystem": "accel", 00:19:26.083 "config": [ 00:19:26.083 { 00:19:26.083 "method": "accel_set_options", 00:19:26.083 "params": { 00:19:26.083 "small_cache_size": 128, 00:19:26.083 "large_cache_size": 16, 00:19:26.083 "task_count": 2048, 00:19:26.083 "sequence_count": 2048, 00:19:26.083 "buf_count": 2048 00:19:26.083 } 00:19:26.083 } 00:19:26.083 ] 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "subsystem": "bdev", 00:19:26.083 "config": [ 00:19:26.083 { 00:19:26.083 "method": "bdev_set_options", 00:19:26.083 "params": { 00:19:26.083 "bdev_io_pool_size": 65535, 00:19:26.083 "bdev_io_cache_size": 256, 00:19:26.083 "bdev_auto_examine": true, 00:19:26.083 "iobuf_small_cache_size": 128, 00:19:26.083 "iobuf_large_cache_size": 16 00:19:26.083 } 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "method": "bdev_raid_set_options", 00:19:26.083 "params": { 00:19:26.083 "process_window_size_kb": 1024, 00:19:26.083 "process_max_bandwidth_mb_sec": 0 00:19:26.083 } 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "method": "bdev_iscsi_set_options", 00:19:26.083 "params": { 00:19:26.083 "timeout_sec": 30 00:19:26.083 } 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "method": "bdev_nvme_set_options", 00:19:26.083 "params": { 00:19:26.083 "action_on_timeout": "none", 
00:19:26.083 "timeout_us": 0, 00:19:26.083 "timeout_admin_us": 0, 00:19:26.083 "keep_alive_timeout_ms": 10000, 00:19:26.083 "arbitration_burst": 0, 00:19:26.083 "low_priority_weight": 0, 00:19:26.083 "medium_priority_weight": 0, 00:19:26.083 "high_priority_weight": 0, 00:19:26.083 "nvme_adminq_poll_period_us": 10000, 00:19:26.083 "nvme_ioq_poll_period_us": 0, 00:19:26.083 "io_queue_requests": 512, 00:19:26.083 "delay_cmd_submit": true, 00:19:26.083 "transport_retry_count": 4, 00:19:26.083 "bdev_retry_count": 3, 00:19:26.083 "transport_ack_timeout": 0, 00:19:26.083 "ctrlr_loss_timeout_sec": 0, 00:19:26.083 "reconnect_delay_sec": 0, 00:19:26.083 "fast_io_fail_timeout_sec": 0, 00:19:26.083 "disable_auto_failback": false, 00:19:26.083 "generate_uuids": false, 00:19:26.083 "transport_tos": 0, 00:19:26.083 "nvme_error_stat": false, 00:19:26.083 "rdma_srq_size": 0, 00:19:26.083 "io_path_stat": false, 00:19:26.083 "allow_accel_sequence": false, 00:19:26.083 "rdma_max_cq_size": 0, 00:19:26.083 "rdma_cm_event_timeout_ms": 0, 00:19:26.083 "dhchap_digests": [ 00:19:26.083 "sha256", 00:19:26.083 "sha384", 00:19:26.083 "sha512" 00:19:26.083 ], 00:19:26.083 "dhchap_dhgroups": [ 00:19:26.083 "null", 00:19:26.083 "ffdhe2048", 00:19:26.083 "ffdhe3072", 00:19:26.083 "ffdhe4096", 00:19:26.083 "ffdhe6144", 00:19:26.083 "ffdhe8192" 00:19:26.083 ] 00:19:26.083 } 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "method": "bdev_nvme_attach_controller", 00:19:26.083 "params": { 00:19:26.083 "name": "nvme0", 00:19:26.083 "trtype": "TCP", 00:19:26.083 "adrfam": "IPv4", 00:19:26.083 "traddr": "10.0.0.2", 00:19:26.083 "trsvcid": "4420", 00:19:26.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.083 "prchk_reftag": false, 00:19:26.083 "prchk_guard": false, 00:19:26.083 "ctrlr_loss_timeout_sec": 0, 00:19:26.083 "reconnect_delay_sec": 0, 00:19:26.083 "fast_io_fail_timeout_sec": 0, 00:19:26.083 "psk": "key0", 00:19:26.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.083 "hdgst": false, 00:19:26.083 "ddgst": false, 00:19:26.083 "multipath": "multipath" 00:19:26.083 } 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "method": "bdev_nvme_set_hotplug", 00:19:26.083 "params": { 00:19:26.083 "period_us": 100000, 00:19:26.083 "enable": false 00:19:26.083 } 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "method": "bdev_enable_histogram", 00:19:26.083 "params": { 00:19:26.083 "name": "nvme0n1", 00:19:26.083 "enable": true 00:19:26.083 } 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "method": "bdev_wait_for_examine" 00:19:26.083 } 00:19:26.083 ] 00:19:26.083 }, 00:19:26.083 { 00:19:26.083 "subsystem": "nbd", 00:19:26.083 "config": [] 00:19:26.083 } 00:19:26.083 ] 00:19:26.083 }' 00:19:26.083 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.083 08:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.083 [2024-11-27 08:02:20.024867] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:19:26.083 [2024-11-27 08:02:20.024917] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473753 ] 00:19:26.083 [2024-11-27 08:02:20.089683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.084 [2024-11-27 08:02:20.131307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.341 [2024-11-27 08:02:20.287118] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.905 08:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.905 08:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.905 08:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:26.905 08:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:27.163 08:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.163 08:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.163 Running I/O for 1 seconds... 00:19:28.200 5166.00 IOPS, 20.18 MiB/s 00:19:28.200 Latency(us) 00:19:28.200 [2024-11-27T07:02:22.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.200 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:28.200 Verification LBA range: start 0x0 length 0x2000 00:19:28.200 nvme0n1 : 1.02 5200.75 20.32 0.00 0.00 24414.20 7151.97 27468.13 00:19:28.200 [2024-11-27T07:02:22.309Z] =================================================================================================================== 00:19:28.200 [2024-11-27T07:02:22.309Z] Total : 5200.75 20.32 0.00 0.00 24414.20 7151.97 27468.13 00:19:28.200 { 00:19:28.200 "results": [ 00:19:28.200 { 00:19:28.200 "job": "nvme0n1", 00:19:28.200 "core_mask": "0x2", 00:19:28.200 "workload": "verify", 00:19:28.200 "status": "finished", 00:19:28.200 "verify_range": { 00:19:28.200 "start": 0, 00:19:28.200 "length": 8192 00:19:28.200 }, 00:19:28.200 "queue_depth": 128, 00:19:28.200 "io_size": 4096, 00:19:28.200 "runtime": 1.017931, 00:19:28.200 "iops": 5200.745433629588, 00:19:28.200 "mibps": 20.315411850115577, 00:19:28.200 "io_failed": 0, 00:19:28.200 "io_timeout": 0, 00:19:28.200 "avg_latency_us": 24414.197201754243, 00:19:28.200 "min_latency_us": 7151.972173913044, 00:19:28.200 "max_latency_us": 27468.132173913044 00:19:28.200 } 00:19:28.200 ], 00:19:28.200 "core_count": 1 00:19:28.200 } 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:28.200 nvmf_trace.0 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2473753 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2473753 ']' 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2473753 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.200 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473753 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473753' 00:19:28.484 killing process with pid 2473753 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2473753 00:19:28.484 Received shutdown signal, test time was about 1.000000 seconds 00:19:28.484 00:19:28.484 Latency(us) 00:19:28.484 [2024-11-27T07:02:22.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.484 [2024-11-27T07:02:22.593Z] =================================================================================================================== 00:19:28.484 [2024-11-27T07:02:22.593Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2473753 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.484 rmmod nvme_tcp 00:19:28.484 rmmod nvme_fabrics 00:19:28.484 rmmod nvme_keyring 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:28.484 08:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2473612 ']' 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2473612 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2473612 ']' 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2473612 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473612 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473612' 00:19:28.484 killing process with pid 2473612 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2473612 00:19:28.484 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2473612 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.742 08:02:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.277 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:31.277 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.um2p7SAPNT /tmp/tmp.9NEyvKLDis /tmp/tmp.XCh5ManDQo 00:19:31.277 00:19:31.277 real 1m18.699s 00:19:31.277 user 2m1.651s 00:19:31.277 sys 0m29.249s 00:19:31.277 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.277 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.277 ************************************ 00:19:31.277 END TEST nvmf_tls 
00:19:31.277 ************************************ 00:19:31.277 08:02:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:31.277 08:02:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.277 08:02:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.277 08:02:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:31.278 ************************************ 00:19:31.278 START TEST nvmf_fips 00:19:31.278 ************************************ 00:19:31.278 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:31.278 * Looking for test storage... 00:19:31.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:31.278 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:31.278 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:31.278 08:02:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:31.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.278 --rc genhtml_branch_coverage=1 00:19:31.278 --rc genhtml_function_coverage=1 00:19:31.278 --rc genhtml_legend=1 00:19:31.278 --rc geninfo_all_blocks=1 00:19:31.278 --rc geninfo_unexecuted_blocks=1 00:19:31.278 00:19:31.278 ' 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:31.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.278 --rc genhtml_branch_coverage=1 00:19:31.278 --rc genhtml_function_coverage=1 00:19:31.278 --rc genhtml_legend=1 00:19:31.278 --rc geninfo_all_blocks=1 00:19:31.278 --rc geninfo_unexecuted_blocks=1 00:19:31.278 00:19:31.278 ' 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:31.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.278 --rc genhtml_branch_coverage=1 00:19:31.278 --rc genhtml_function_coverage=1 00:19:31.278 --rc genhtml_legend=1 00:19:31.278 --rc geninfo_all_blocks=1 00:19:31.278 --rc geninfo_unexecuted_blocks=1 00:19:31.278 00:19:31.278 ' 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:31.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.278 --rc genhtml_branch_coverage=1 00:19:31.278 --rc genhtml_function_coverage=1 00:19:31.278 --rc genhtml_legend=1 00:19:31.278 --rc geninfo_all_blocks=1 00:19:31.278 --rc geninfo_unexecuted_blocks=1 00:19:31.278 00:19:31.278 ' 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.278 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:31.279 08:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:31.279 Error setting digest 00:19:31.279 40A20389347F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:31.279 40A20389347F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:31.279 
08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:31.279 08:02:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.547 08:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:36.547 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:36.547 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:36.547 08:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:36.547 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:36.548 Found net devices under 0000:86:00.0: cvl_0_0 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:36.548 Found net devices under 0000:86:00.1: cvl_0_1 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:36.548 08:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:36.548 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:36.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:19:36.806 00:19:36.806 --- 10.0.0.2 ping statistics --- 00:19:36.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.806 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:36.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:19:36.806 00:19:36.806 --- 10.0.0.1 ping statistics --- 00:19:36.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.806 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2477775 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2477775 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2477775 ']' 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.806 08:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:36.806 [2024-11-27 08:02:30.826055] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:19:36.806 [2024-11-27 08:02:30.826108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.806 [2024-11-27 08:02:30.894022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.064 [2024-11-27 08:02:30.933821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.064 [2024-11-27 08:02:30.933856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.064 [2024-11-27 08:02:30.933863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.064 [2024-11-27 08:02:30.933869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.064 [2024-11-27 08:02:30.933874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.064 [2024-11-27 08:02:30.934461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.LwL 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.LwL 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.LwL 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.LwL 00:19:37.684 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:37.942 [2024-11-27 08:02:31.856431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.942 [2024-11-27 08:02:31.872436] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.942 [2024-11-27 08:02:31.872663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.942 malloc0 00:19:37.942 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:37.942 08:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2477998 00:19:37.942 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:37.942 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2477998 /var/tmp/bdevperf.sock 00:19:37.942 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2477998 ']' 00:19:37.942 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.942 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.942 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.942 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.942 08:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:37.942 [2024-11-27 08:02:31.992817] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:19:37.942 [2024-11-27 08:02:31.992871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477998 ] 00:19:38.200 [2024-11-27 08:02:32.051376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.200 [2024-11-27 08:02:32.092082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.200 08:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.200 08:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:38.200 08:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.LwL 00:19:38.458 08:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.458 [2024-11-27 08:02:32.536892] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.716 TLSTESTn1 00:19:38.716 08:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:38.716 Running I/O for 10 seconds... 
00:19:41.022 5300.00 IOPS, 20.70 MiB/s [2024-11-27T07:02:36.064Z] 5376.00 IOPS, 21.00 MiB/s [2024-11-27T07:02:36.997Z] 5375.00 IOPS, 21.00 MiB/s [2024-11-27T07:02:37.950Z] 5402.75 IOPS, 21.10 MiB/s [2024-11-27T07:02:38.885Z] 5454.40 IOPS, 21.31 MiB/s [2024-11-27T07:02:39.817Z] 5457.00 IOPS, 21.32 MiB/s [2024-11-27T07:02:40.751Z] 5300.71 IOPS, 20.71 MiB/s [2024-11-27T07:02:42.124Z] 5102.62 IOPS, 19.93 MiB/s [2024-11-27T07:02:43.059Z] 4958.00 IOPS, 19.37 MiB/s [2024-11-27T07:02:43.059Z] 4836.10 IOPS, 18.89 MiB/s 00:19:48.950 Latency(us) 00:19:48.950 [2024-11-27T07:02:43.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.950 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:48.950 Verification LBA range: start 0x0 length 0x2000 00:19:48.950 TLSTESTn1 : 10.02 4837.89 18.90 0.00 0.00 26418.50 5926.73 36244.26 00:19:48.950 [2024-11-27T07:02:43.059Z] =================================================================================================================== 00:19:48.950 [2024-11-27T07:02:43.059Z] Total : 4837.89 18.90 0.00 0.00 26418.50 5926.73 36244.26 00:19:48.950 { 00:19:48.950 "results": [ 00:19:48.950 { 00:19:48.950 "job": "TLSTESTn1", 00:19:48.950 "core_mask": "0x4", 00:19:48.950 "workload": "verify", 00:19:48.950 "status": "finished", 00:19:48.950 "verify_range": { 00:19:48.950 "start": 0, 00:19:48.950 "length": 8192 00:19:48.950 }, 00:19:48.950 "queue_depth": 128, 00:19:48.950 "io_size": 4096, 00:19:48.950 "runtime": 10.022554, 00:19:48.950 "iops": 4837.888625992936, 00:19:48.950 "mibps": 18.898002445284906, 00:19:48.950 "io_failed": 0, 00:19:48.950 "io_timeout": 0, 00:19:48.950 "avg_latency_us": 26418.503342395787, 00:19:48.950 "min_latency_us": 5926.733913043478, 00:19:48.950 "max_latency_us": 36244.257391304345 00:19:48.950 } 00:19:48.950 ], 00:19:48.950 "core_count": 1 00:19:48.950 } 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:48.950 nvmf_trace.0 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2477998 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2477998 ']' 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2477998 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477998 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477998' 00:19:48.950 killing process with pid 2477998 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2477998 00:19:48.950 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.950 00:19:48.950 Latency(us) 00:19:48.950 [2024-11-27T07:02:43.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.950 [2024-11-27T07:02:43.059Z] =================================================================================================================== 00:19:48.950 [2024-11-27T07:02:43.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.950 08:02:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2477998 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:49.209 rmmod nvme_tcp 00:19:49.209 rmmod nvme_fabrics 00:19:49.209 rmmod nvme_keyring 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2477775 ']' 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2477775 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2477775 ']' 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2477775 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477775 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:49.209 08:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477775' 00:19:49.209 killing process with pid 2477775 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2477775 00:19:49.209 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2477775 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.469 08:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.373 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:51.373 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.LwL 00:19:51.373 00:19:51.373 real 0m20.576s 00:19:51.373 user 0m21.648s 00:19:51.373 sys 0m9.474s 00:19:51.373 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.373 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:51.373 ************************************ 00:19:51.373 END TEST nvmf_fips 00:19:51.373 ************************************ 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:51.632 ************************************ 00:19:51.632 START TEST nvmf_control_msg_list 00:19:51.632 ************************************ 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:51.632 * Looking for test storage... 
00:19:51.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:51.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.632 --rc genhtml_branch_coverage=1 00:19:51.632 --rc genhtml_function_coverage=1 00:19:51.632 --rc genhtml_legend=1 00:19:51.632 --rc geninfo_all_blocks=1 00:19:51.632 --rc geninfo_unexecuted_blocks=1 00:19:51.632 00:19:51.632 ' 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:51.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.632 --rc genhtml_branch_coverage=1 00:19:51.632 --rc genhtml_function_coverage=1 00:19:51.632 --rc genhtml_legend=1 00:19:51.632 --rc geninfo_all_blocks=1 00:19:51.632 --rc geninfo_unexecuted_blocks=1 00:19:51.632 00:19:51.632 ' 00:19:51.632 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:51.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.632 --rc genhtml_branch_coverage=1 00:19:51.632 --rc genhtml_function_coverage=1 00:19:51.632 --rc genhtml_legend=1 00:19:51.632 --rc geninfo_all_blocks=1 00:19:51.632 --rc geninfo_unexecuted_blocks=1 00:19:51.633 00:19:51.633 ' 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:51.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.633 --rc genhtml_branch_coverage=1 00:19:51.633 --rc genhtml_function_coverage=1 00:19:51.633 --rc genhtml_legend=1 00:19:51.633 --rc geninfo_all_blocks=1 00:19:51.633 --rc geninfo_unexecuted_blocks=1 00:19:51.633 00:19:51.633 ' 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:51.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:51.633 08:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:58.195 08:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.195 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:58.196 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.196 08:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:58.196 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:58.196 Found net devices under 0000:86:00.0: cvl_0_0 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:58.196 Found net devices under 0000:86:00.1: cvl_0_1 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.196 08:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:58.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:19:58.196 00:19:58.196 --- 10.0.0.2 ping statistics --- 00:19:58.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.196 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:19:58.196 00:19:58.196 --- 10.0.0.1 ping statistics --- 00:19:58.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.196 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2483181 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2483181 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2483181 ']' 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.196 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:58.196 [2024-11-27 08:02:51.409989] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:19:58.196 [2024-11-27 08:02:51.410037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.197 [2024-11-27 08:02:51.476784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.197 [2024-11-27 08:02:51.520148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.197 [2024-11-27 08:02:51.520183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.197 [2024-11-27 08:02:51.520190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.197 [2024-11-27 08:02:51.520197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.197 [2024-11-27 08:02:51.520202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
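For readers following the trace, the bring-up that starts here reduces to a short command sequence: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace, the harness waits for its RPC socket at /var/tmp/spdk.sock, and the test then creates a TCP transport with a single control message buffer before attaching three perf initiators. The sketch below is assembled only from commands visible in this trace (rpc_cmd in the trace corresponds to scripts/rpc.py calls; the namespace name, the 10.0.0.2:4420 listener and the Malloc0 sizing are specific to this run), so treat it as a condensed illustration under those assumptions rather than the test script itself:

    # start the target inside the test namespace; the harness then polls /var/tmp/spdk.sock until it answers
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

    # TCP transport with 768-byte in-capsule data and only one control message buffer (the condition this test exercises)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

    # subsystem backed by a 32 MB, 512-byte-block malloc bdev, listening on the namespaced address
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # three spdk_nvme_perf instances (core masks 0x2, 0x4, 0x8) then contend for that single control message
    ./build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &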
00:19:58.197 [2024-11-27 08:02:51.520798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:58.197 [2024-11-27 08:02:51.654556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:58.197 Malloc0 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.197 08:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:58.197 [2024-11-27 08:02:51.702984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2483383 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2483385 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2483387 00:19:58.197 08:02:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2483383 00:19:58.197 [2024-11-27 08:02:51.757362] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:58.197 [2024-11-27 08:02:51.777405] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:58.197 [2024-11-27 08:02:51.777560] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:59.131 Initializing NVMe Controllers 00:19:59.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:59.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:59.131 Initialization complete. Launching workers. 
00:19:59.131 ======================================================== 00:19:59.131 Latency(us) 00:19:59.131 Device Information : IOPS MiB/s Average min max 00:19:59.131 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 36.00 0.14 28562.08 172.18 41923.87 00:19:59.131 ======================================================== 00:19:59.131 Total : 36.00 0.14 28562.08 172.18 41923.87 00:19:59.131 00:19:59.131 Initializing NVMe Controllers 00:19:59.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:59.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:59.131 Initialization complete. Launching workers. 00:19:59.131 ======================================================== 00:19:59.131 Latency(us) 00:19:59.131 Device Information : IOPS MiB/s Average min max 00:19:59.131 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6478.00 25.30 155.67 140.16 40879.17 00:19:59.131 ======================================================== 00:19:59.131 Total : 6478.00 25.30 155.67 140.16 40879.17 00:19:59.131 00:19:59.132 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2483385 00:19:59.132 Initializing NVMe Controllers 00:19:59.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:59.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:59.132 Initialization complete. Launching workers. 00:19:59.132 ======================================================== 00:19:59.132 Latency(us) 00:19:59.132 Device Information : IOPS MiB/s Average min max 00:19:59.132 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41358.34 40804.21 41924.37 00:19:59.132 ======================================================== 00:19:59.132 Total : 25.00 0.10 41358.34 40804.21 41924.37 00:19:59.132 00:19:59.132 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2483387 00:19:59.132 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:59.132 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:59.132 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:59.132 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:59.132 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:59.132 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:59.132 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:59.132 08:02:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:59.132 rmmod nvme_tcp 00:19:59.132 rmmod nvme_fabrics 00:19:59.132 rmmod nvme_keyring 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 2483181 ']' 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2483181 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2483181 ']' 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2483181 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2483181 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2483181' 00:19:59.132 killing process with pid 2483181 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2483181 00:19:59.132 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2483181 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.390 08:02:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.292 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:01.292 00:20:01.292 real 0m9.797s 00:20:01.292 user 0m6.637s 00:20:01.292 sys 0m5.143s 00:20:01.292 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.292 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.292 ************************************ 00:20:01.292 END TEST nvmf_control_msg_list 00:20:01.292 
************************************ 00:20:01.292 08:02:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:01.292 08:02:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:01.292 08:02:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.292 08:02:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.292 ************************************ 00:20:01.292 START TEST nvmf_wait_for_buf 00:20:01.292 ************************************ 00:20:01.292 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:01.551 * Looking for test storage... 00:20:01.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:01.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.551 --rc genhtml_branch_coverage=1 00:20:01.551 --rc genhtml_function_coverage=1 00:20:01.551 --rc genhtml_legend=1 00:20:01.551 --rc geninfo_all_blocks=1 00:20:01.551 --rc geninfo_unexecuted_blocks=1 00:20:01.551 00:20:01.551 ' 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:01.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.551 --rc genhtml_branch_coverage=1 00:20:01.551 --rc genhtml_function_coverage=1 00:20:01.551 --rc genhtml_legend=1 00:20:01.551 --rc geninfo_all_blocks=1 00:20:01.551 --rc geninfo_unexecuted_blocks=1 00:20:01.551 00:20:01.551 ' 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:01.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.551 --rc genhtml_branch_coverage=1 00:20:01.551 --rc genhtml_function_coverage=1 00:20:01.551 --rc genhtml_legend=1 00:20:01.551 --rc geninfo_all_blocks=1 00:20:01.551 --rc geninfo_unexecuted_blocks=1 00:20:01.551 00:20:01.551 ' 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:01.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.551 --rc genhtml_branch_coverage=1 00:20:01.551 --rc genhtml_function_coverage=1 00:20:01.551 --rc genhtml_legend=1 00:20:01.551 --rc geninfo_all_blocks=1 00:20:01.551 --rc geninfo_unexecuted_blocks=1 00:20:01.551 00:20:01.551 ' 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.551 08:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.551 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:01.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.552 08:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.111 
08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.111 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:08.112 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:08.112 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:08.112 Found net devices under 0000:86:00.0: cvl_0_0 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:08.112 Found net devices under 0000:86:00.1: cvl_0_1 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.112 08:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:08.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:20:08.112 00:20:08.112 --- 10.0.0.2 ping statistics --- 00:20:08.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.112 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:20:08.112 00:20:08.112 --- 10.0.0.1 ping statistics --- 00:20:08.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.112 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2487036 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2487036 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2487036 ']' 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.112 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.112 [2024-11-27 08:03:01.353774] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:20:08.112 [2024-11-27 08:03:01.353822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.113 [2024-11-27 08:03:01.422435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.113 [2024-11-27 08:03:01.464099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.113 [2024-11-27 08:03:01.464136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.113 [2024-11-27 08:03:01.464144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.113 [2024-11-27 08:03:01.464152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.113 [2024-11-27 08:03:01.464175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.113 [2024-11-27 08:03:01.464795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.113 08:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.113 Malloc0 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.113 [2024-11-27 08:03:01.647432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.113 [2024-11-27 08:03:01.671633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.113 08:03:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:08.113 [2024-11-27 08:03:01.751047] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:09.043 Initializing NVMe Controllers 00:20:09.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:09.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:09.043 Initialization complete. Launching workers. 00:20:09.043 ======================================================== 00:20:09.043 Latency(us) 00:20:09.043 Device Information : IOPS MiB/s Average min max 00:20:09.043 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32238.51 7260.95 63848.83 00:20:09.043 ======================================================== 00:20:09.043 Total : 129.00 16.12 32238.51 7260.95 63848.83 00:20:09.043 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:09.301 rmmod nvme_tcp 00:20:09.301 rmmod nvme_fabrics 00:20:09.301 rmmod nvme_keyring 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2487036 ']' 00:20:09.301 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2487036 00:20:09.302 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2487036 ']' 00:20:09.302 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2487036 00:20:09.302 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:20:09.302 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.302 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2487036 00:20:09.302 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.302 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.302 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2487036' 00:20:09.302 killing process with pid 2487036 00:20:09.302 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2487036 00:20:09.302 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2487036 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.560 08:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.462 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:11.462 00:20:11.462 real 0m10.128s 00:20:11.462 user 0m3.818s 00:20:11.462 sys 0m4.743s 00:20:11.462 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.462 08:03:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.462 ************************************ 00:20:11.462 END TEST nvmf_wait_for_buf 00:20:11.462 ************************************ 00:20:11.462 08:03:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:11.462 08:03:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:11.462 08:03:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:11.462 08:03:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:11.462 08:03:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:11.462 08:03:05 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:16.728 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:16.728 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:16.728 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:16.729 Found net devices under 0000:86:00.0: cvl_0_0 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:16.729 Found net devices under 0000:86:00.1: cvl_0_1 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:16.729 ************************************ 00:20:16.729 START TEST nvmf_perf_adq 00:20:16.729 ************************************ 00:20:16.729 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:16.729 * Looking for test storage... 00:20:16.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.988 08:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:16.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.988 --rc genhtml_branch_coverage=1 00:20:16.988 --rc genhtml_function_coverage=1 00:20:16.988 --rc genhtml_legend=1 00:20:16.988 --rc geninfo_all_blocks=1 00:20:16.988 --rc geninfo_unexecuted_blocks=1 00:20:16.988 00:20:16.988 ' 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:16.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.988 --rc genhtml_branch_coverage=1 00:20:16.988 --rc genhtml_function_coverage=1 00:20:16.988 --rc genhtml_legend=1 00:20:16.988 --rc geninfo_all_blocks=1 00:20:16.988 --rc geninfo_unexecuted_blocks=1 00:20:16.988 00:20:16.988 ' 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:16.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.988 --rc genhtml_branch_coverage=1 00:20:16.988 --rc genhtml_function_coverage=1 00:20:16.988 --rc genhtml_legend=1 00:20:16.988 --rc geninfo_all_blocks=1 00:20:16.988 --rc geninfo_unexecuted_blocks=1 00:20:16.988 00:20:16.988 ' 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:16.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.988 --rc genhtml_branch_coverage=1 00:20:16.988 --rc genhtml_function_coverage=1 00:20:16.988 --rc genhtml_legend=1 00:20:16.988 --rc geninfo_all_blocks=1 00:20:16.988 --rc geninfo_unexecuted_blocks=1 00:20:16.988 00:20:16.988 ' 00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:16.988 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:16.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:16.989 08:03:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.989 08:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:22.249 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:22.250 08:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:22.250 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:22.250 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:22.250 Found net devices under 0000:86:00.0: cvl_0_0 00:20:22.250 08:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:22.250 Found net devices under 0000:86:00.1: cvl_0_1 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:22.250 08:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:22.816 08:03:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:24.717 08:03:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:29.998 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:29.998 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:29.999 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:29.999 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:29.999 Found net devices under 0000:86:00.0: cvl_0_0 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:29.999 Found net devices under 0000:86:00.1: cvl_0_1 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.999 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:30.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:20:30.000 00:20:30.000 --- 10.0.0.2 ping statistics --- 00:20:30.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.000 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:30.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:20:30.000 00:20:30.000 --- 10.0.0.1 ping statistics --- 00:20:30.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.000 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2495578 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2495578 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2495578 ']' 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.000 08:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:30.000 [2024-11-27 08:03:24.006978] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
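The nvmftestinit trace above amounts to a small amount of namespace plumbing before the target comes up: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace to act as the NVMe/TCP target interface, the second port (cvl_0_1) stays in the default namespace as the initiator, TCP port 4420 is opened with iptables, and reachability is checked in both directions before nvmf_tgt is launched inside the namespace. A minimal shell sketch of the same steps, using the interface names and addresses from this run (paths abbreviated):

    # target-side port lives in its own namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in, then sanity-check connectivity both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target application is then run inside the namespace, waiting for RPC configuration
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc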
00:20:30.000 [2024-11-27 08:03:24.007022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.000 [2024-11-27 08:03:24.073294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:30.335 [2024-11-27 08:03:24.120429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.335 [2024-11-27 08:03:24.120463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.335 [2024-11-27 08:03:24.120470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.335 [2024-11-27 08:03:24.120476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.335 [2024-11-27 08:03:24.120481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.335 [2024-11-27 08:03:24.122026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.335 [2024-11-27 08:03:24.122047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.335 [2024-11-27 08:03:24.122066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.335 [2024-11-27 08:03:24.122068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.335 
08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.335 [2024-11-27 08:03:24.333461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.335 Malloc1 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.335 [2024-11-27 08:03:24.391363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2495772 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:30.335 08:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:32.342 08:03:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:32.342 08:03:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.342 08:03:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:32.342 08:03:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.342 08:03:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:32.342 "tick_rate": 2300000000, 00:20:32.342 "poll_groups": [ 00:20:32.342 { 00:20:32.342 "name": "nvmf_tgt_poll_group_000", 00:20:32.342 "admin_qpairs": 1, 00:20:32.342 "io_qpairs": 1, 00:20:32.342 "current_admin_qpairs": 1, 00:20:32.342 "current_io_qpairs": 1, 00:20:32.342 "pending_bdev_io": 0, 00:20:32.342 "completed_nvme_io": 19282, 00:20:32.342 "transports": [ 00:20:32.342 { 00:20:32.342 "trtype": "TCP" 00:20:32.342 } 00:20:32.342 ] 00:20:32.342 }, 00:20:32.342 { 00:20:32.342 "name": "nvmf_tgt_poll_group_001", 00:20:32.342 "admin_qpairs": 0, 00:20:32.342 "io_qpairs": 1, 00:20:32.342 "current_admin_qpairs": 0, 00:20:32.342 "current_io_qpairs": 1, 00:20:32.342 "pending_bdev_io": 0, 00:20:32.342 "completed_nvme_io": 19416, 00:20:32.342 "transports": [ 00:20:32.342 { 00:20:32.342 "trtype": "TCP" 00:20:32.342 } 00:20:32.342 ] 00:20:32.342 }, 00:20:32.342 { 00:20:32.342 "name": "nvmf_tgt_poll_group_002", 00:20:32.342 "admin_qpairs": 0, 00:20:32.342 "io_qpairs": 1, 00:20:32.342 "current_admin_qpairs": 0, 00:20:32.342 "current_io_qpairs": 1, 00:20:32.342 "pending_bdev_io": 0, 00:20:32.342 "completed_nvme_io": 19442, 00:20:32.342 "transports": [ 00:20:32.342 { 00:20:32.342 "trtype": "TCP" 00:20:32.342 } 00:20:32.342 ] 00:20:32.342 }, 00:20:32.342 { 00:20:32.342 "name": "nvmf_tgt_poll_group_003", 00:20:32.342 "admin_qpairs": 0, 00:20:32.342 "io_qpairs": 1, 00:20:32.342 "current_admin_qpairs": 0, 00:20:32.342 "current_io_qpairs": 1, 00:20:32.342 "pending_bdev_io": 0, 00:20:32.342 "completed_nvme_io": 19234, 00:20:32.342 "transports": [ 00:20:32.342 { 00:20:32.342 "trtype": "TCP" 00:20:32.342 } 00:20:32.342 ] 00:20:32.342 } 00:20:32.342 ] 00:20:32.343 }' 00:20:32.343 08:03:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:32.343 08:03:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:32.600 08:03:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:32.600 08:03:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:32.600 08:03:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2495772 00:20:40.714 Initializing NVMe Controllers 00:20:40.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:40.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:40.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:40.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:20:40.714 Initialization complete. Launching workers. 00:20:40.714 ======================================================== 00:20:40.714 Latency(us) 00:20:40.714 Device Information : IOPS MiB/s Average min max 00:20:40.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10162.30 39.70 6298.22 1520.51 10868.64 00:20:40.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10333.00 40.36 6194.03 2410.04 10551.13 00:20:40.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10316.20 40.30 6204.54 2104.43 10771.14 00:20:40.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10213.70 39.90 6265.20 2283.09 14702.09 00:20:40.714 ======================================================== 00:20:40.714 Total : 41025.20 160.25 6240.20 1520.51 14702.09 00:20:40.715 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:40.715 rmmod nvme_tcp 00:20:40.715 rmmod nvme_fabrics 00:20:40.715 rmmod nvme_keyring 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2495578 ']' 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2495578 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2495578 ']' 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2495578 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2495578 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2495578' 00:20:40.715 killing process with pid 2495578 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2495578 00:20:40.715 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2495578 00:20:40.974 08:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.974 08:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.877 08:03:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.877 08:03:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:42.877 08:03:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:42.877 08:03:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:44.257 08:03:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:46.160 08:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:51.434 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:51.435 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:51.435 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:51.435 Found net devices under 0000:86:00.0: cvl_0_0 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:51.435 Found net devices under 0000:86:00.1: cvl_0_1 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.435 08:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.435 08:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:51.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:20:51.435 00:20:51.435 --- 10.0.0.2 ping statistics --- 00:20:51.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.435 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:20:51.435 00:20:51.435 --- 10.0.0.1 ping statistics --- 00:20:51.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.435 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:51.435 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:51.436 net.core.busy_poll = 1 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:51.436 net.core.busy_read = 1 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2499462 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2499462 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2499462 ']' 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.436 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.436 [2024-11-27 08:03:45.462224] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:20:51.436 [2024-11-27 08:03:45.462273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.436 [2024-11-27 08:03:45.529857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.695 [2024-11-27 08:03:45.574691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
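The adq_configure_driver block just traced is where the ADQ-specific NIC setup happens: hardware TC offload is enabled on the target port, the channel-pkt-inspect-optimize private flag is turned off, socket busy polling is switched on (only for this second, ADQ-enabled run), and an mqprio root qdisc plus a flower filter steer NVMe/TCP traffic to 10.0.0.2:4420 into its own hardware traffic class. Summarized as a shell sketch; the ethtool and tc commands run against cvl_0_0 inside the cvl_0_0_ns_spdk namespace, with the ip netns exec prefix dropped here for brevity:

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1        # socket busy polling (set only for the ADQ run)
    sysctl -w net.core.busy_read=1
    # two traffic classes, two queues each; TC 1 is reserved for the NVMe/TCP listener
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # SPDK helper that configures XPS for the device queues

The matching SPDK-side switches show up in the RPC calls that follow: sock_impl_set_options with --enable-placement-id 1 and nvmf_create_transport -t tcp with --sock-priority 1, versus 0 for both in the baseline run earlier.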
00:20:51.695 [2024-11-27 08:03:45.574730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.695 [2024-11-27 08:03:45.574738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.695 [2024-11-27 08:03:45.574744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.696 [2024-11-27 08:03:45.574749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.696 [2024-11-27 08:03:45.576174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.696 [2024-11-27 08:03:45.576272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.696 [2024-11-27 08:03:45.576364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.696 [2024-11-27 08:03:45.576366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.696 08:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.696 [2024-11-27 08:03:45.790518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.696 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.990 Malloc1 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.990 [2024-11-27 08:03:45.851681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2499637 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:51.990 08:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:53.892 08:03:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:53.892 08:03:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.892 08:03:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.892 08:03:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.892 08:03:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:53.892 "tick_rate": 2300000000, 00:20:53.892 "poll_groups": [ 00:20:53.892 { 00:20:53.892 "name": "nvmf_tgt_poll_group_000", 00:20:53.892 "admin_qpairs": 1, 00:20:53.892 "io_qpairs": 3, 00:20:53.892 "current_admin_qpairs": 1, 00:20:53.892 "current_io_qpairs": 3, 00:20:53.892 "pending_bdev_io": 0, 00:20:53.892 "completed_nvme_io": 29290, 00:20:53.892 "transports": [ 00:20:53.892 { 00:20:53.892 "trtype": "TCP" 00:20:53.892 } 00:20:53.892 ] 00:20:53.892 }, 00:20:53.892 { 00:20:53.892 "name": "nvmf_tgt_poll_group_001", 00:20:53.892 "admin_qpairs": 0, 00:20:53.892 "io_qpairs": 1, 00:20:53.892 "current_admin_qpairs": 0, 00:20:53.892 "current_io_qpairs": 1, 00:20:53.892 "pending_bdev_io": 0, 00:20:53.892 "completed_nvme_io": 27566, 00:20:53.892 "transports": [ 00:20:53.892 { 00:20:53.892 "trtype": "TCP" 00:20:53.892 } 00:20:53.892 ] 00:20:53.892 }, 00:20:53.892 { 00:20:53.892 "name": "nvmf_tgt_poll_group_002", 00:20:53.892 "admin_qpairs": 0, 00:20:53.892 "io_qpairs": 0, 00:20:53.892 "current_admin_qpairs": 0, 00:20:53.892 "current_io_qpairs": 0, 00:20:53.892 "pending_bdev_io": 0, 00:20:53.892 "completed_nvme_io": 0, 00:20:53.892 "transports": [ 00:20:53.892 { 00:20:53.892 "trtype": "TCP" 00:20:53.892 } 00:20:53.892 ] 00:20:53.892 }, 00:20:53.892 { 00:20:53.892 "name": "nvmf_tgt_poll_group_003", 00:20:53.892 "admin_qpairs": 0, 00:20:53.892 "io_qpairs": 0, 00:20:53.892 "current_admin_qpairs": 0, 00:20:53.892 "current_io_qpairs": 0, 00:20:53.892 "pending_bdev_io": 0, 00:20:53.892 "completed_nvme_io": 0, 00:20:53.892 "transports": [ 00:20:53.892 { 00:20:53.892 "trtype": "TCP" 00:20:53.892 } 00:20:53.892 ] 00:20:53.892 } 00:20:53.892 ] 00:20:53.892 }' 00:20:53.892 08:03:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:53.892 08:03:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:53.892 08:03:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:53.892 08:03:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:53.892 08:03:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2499637 00:21:02.016 Initializing NVMe Controllers 00:21:02.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:02.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:02.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:02.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:02.016 Initialization complete. Launching workers. 
00:21:02.016 ========================================================
00:21:02.016 Latency(us)
00:21:02.016 Device Information : IOPS MiB/s Average min max
00:21:02.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5341.19 20.86 11983.99 1521.94 57672.28
00:21:02.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5235.19 20.45 12224.35 1500.62 58531.87
00:21:02.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 15079.08 58.90 4243.77 1352.75 46064.27
00:21:02.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4842.60 18.92 13217.61 1675.69 61500.23
00:21:02.016 ========================================================
00:21:02.016 Total : 30498.07 119.13 8394.15 1352.75 61500.23
00:21:02.016
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:02.016 rmmod nvme_tcp
00:21:02.016 rmmod nvme_fabrics
00:21:02.016 rmmod nvme_keyring
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2499462 ']'
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2499462
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2499462 ']'
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2499462
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:02.016 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2499462
00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2499462'
00:21:02.274 killing process with pid 2499462
00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2499462
00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2499462
00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:02.274
08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.274 08:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:05.560 00:21:05.560 real 0m48.663s 00:21:05.560 user 2m43.381s 00:21:05.560 sys 0m9.662s 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.560 ************************************ 00:21:05.560 END TEST nvmf_perf_adq 00:21:05.560 ************************************ 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:05.560 ************************************ 00:21:05.560 START TEST nvmf_shutdown 00:21:05.560 ************************************ 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:05.560 * Looking for test storage... 
00:21:05.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.560 --rc genhtml_branch_coverage=1 00:21:05.560 --rc genhtml_function_coverage=1 00:21:05.560 --rc genhtml_legend=1 00:21:05.560 --rc geninfo_all_blocks=1 00:21:05.560 --rc geninfo_unexecuted_blocks=1 00:21:05.560 00:21:05.560 ' 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.560 --rc genhtml_branch_coverage=1 00:21:05.560 --rc genhtml_function_coverage=1 00:21:05.560 --rc genhtml_legend=1 00:21:05.560 --rc geninfo_all_blocks=1 00:21:05.560 --rc geninfo_unexecuted_blocks=1 00:21:05.560 00:21:05.560 ' 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.560 --rc genhtml_branch_coverage=1 00:21:05.560 --rc genhtml_function_coverage=1 00:21:05.560 --rc genhtml_legend=1 00:21:05.560 --rc geninfo_all_blocks=1 00:21:05.560 --rc geninfo_unexecuted_blocks=1 00:21:05.560 00:21:05.560 ' 00:21:05.560 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.560 --rc genhtml_branch_coverage=1 00:21:05.560 --rc genhtml_function_coverage=1 00:21:05.560 --rc genhtml_legend=1 00:21:05.560 --rc geninfo_all_blocks=1 00:21:05.560 --rc geninfo_unexecuted_blocks=1 00:21:05.561 00:21:05.561 ' 00:21:05.561 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.819 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
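For orientation, the nvmf_perf_adq run that finished above reduces to a short ADQ steering recipe. The following is a condensed sketch, not a verbatim excerpt of perf_adq.sh: it assumes the E810 port is the one named cvl_0_0 inside the cvl_0_0_ns_spdk namespace, and it uses scripts/rpc.py directly where the test goes through its rpc_cmd wrapper; the flag values are the ones visible in the trace.

    # steer NVMe/TCP traffic (TCP dport 4420 on 10.0.0.2) into its own hardware traffic class
    sysctl -w net.core.busy_read=1
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # the test also runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align XPS with the queues

    # on the SPDK side (nvmf_tgt started with --wait-for-rpc), before initializing the framework:
    scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

    # sanity check after a perf run: count poll groups that stayed idle (perf_adq.sh@108)
    scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l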
00:21:05.819 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.819 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.819 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.819 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.819 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:05.820 08:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:05.820 ************************************ 00:21:05.820 START TEST nvmf_shutdown_tc1 00:21:05.820 ************************************ 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.820 08:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.092 08:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.092 08:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:11.092 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:11.092 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.092 08:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:11.092 Found net devices under 0000:86:00.0: cvl_0_0 00:21:11.092 08:04:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:11.092 Found net devices under 0000:86:00.1: cvl_0_1 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:11.092 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:11.093 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.352 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.352 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.352 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:11.352 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:11.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:21:11.352 00:21:11.352 --- 10.0.0.2 ping statistics --- 00:21:11.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.352 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:21:11.352 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:11.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:21:11.352 00:21:11.352 --- 10.0.0.1 ping statistics --- 00:21:11.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.352 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:21:11.352 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.352 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:11.352 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.352 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.352 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2504925 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2504925 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2504925 ']' 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
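Stripped of the helper indirection, the nvmf_tcp_init sequence traced just above amounts to the following namespace plumbing. This is a condensed sketch; the interface names and addresses are the ones from this run, and the iptables comment decoration is omitted.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # namespace -> initiator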
00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.353 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:11.353 [2024-11-27 08:04:05.332156] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:21:11.353 [2024-11-27 08:04:05.332201] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.353 [2024-11-27 08:04:05.398640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.353 [2024-11-27 08:04:05.441505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.353 [2024-11-27 08:04:05.441544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.353 [2024-11-27 08:04:05.441552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.353 [2024-11-27 08:04:05.441558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.353 [2024-11-27 08:04:05.441563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.353 [2024-11-27 08:04:05.443266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.353 [2024-11-27 08:04:05.443329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.353 [2024-11-27 08:04:05.443445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.353 [2024-11-27 08:04:05.443446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:11.612 [2024-11-27 08:04:05.582097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:11.612 08:04:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.612 08:04:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:11.612 Malloc1 
00:21:11.612 [2024-11-27 08:04:05.692865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.612 Malloc2 00:21:11.871 Malloc3 00:21:11.871 Malloc4 00:21:11.871 Malloc5 00:21:11.871 Malloc6 00:21:11.871 Malloc7 00:21:12.129 Malloc8 00:21:12.129 Malloc9 00:21:12.129 Malloc10 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2505138 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2505138 /var/tmp/bdevperf.sock 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2505138 ']' 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
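The gen_nvmf_target_json fragments that follow build one bdev_nvme_attach_controller entry per subsystem from a heredoc template. With the values already visible in this log (target 10.0.0.2, port 4420, digests left at their false defaults) and assuming TEST_TRANSPORT=tcp as implied by --transport=tcp, the first rendered entry comes out roughly as below. This is a hand-expanded illustration of the template, not captured output.

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }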
00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:12.129 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:12.129 { 00:21:12.129 "params": { 00:21:12.129 "name": "Nvme$subsystem", 00:21:12.129 "trtype": "$TEST_TRANSPORT", 00:21:12.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.129 "adrfam": "ipv4", 00:21:12.129 "trsvcid": "$NVMF_PORT", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.130 "hdgst": ${hdgst:-false}, 00:21:12.130 "ddgst": ${ddgst:-false} 00:21:12.130 }, 00:21:12.130 "method": "bdev_nvme_attach_controller" 00:21:12.130 } 00:21:12.130 EOF 00:21:12.130 )") 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:12.130 { 00:21:12.130 "params": { 00:21:12.130 "name": "Nvme$subsystem", 00:21:12.130 "trtype": "$TEST_TRANSPORT", 00:21:12.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.130 "adrfam": "ipv4", 00:21:12.130 "trsvcid": "$NVMF_PORT", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.130 "hdgst": ${hdgst:-false}, 00:21:12.130 "ddgst": ${ddgst:-false} 00:21:12.130 }, 00:21:12.130 "method": "bdev_nvme_attach_controller" 00:21:12.130 } 00:21:12.130 EOF 00:21:12.130 )") 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:12.130 { 00:21:12.130 "params": { 00:21:12.130 "name": "Nvme$subsystem", 00:21:12.130 "trtype": "$TEST_TRANSPORT", 00:21:12.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.130 "adrfam": "ipv4", 00:21:12.130 "trsvcid": "$NVMF_PORT", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.130 "hdgst": ${hdgst:-false}, 00:21:12.130 "ddgst": ${ddgst:-false} 00:21:12.130 }, 00:21:12.130 "method": "bdev_nvme_attach_controller" 00:21:12.130 } 00:21:12.130 EOF 00:21:12.130 )") 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:12.130 { 00:21:12.130 "params": { 00:21:12.130 "name": "Nvme$subsystem", 00:21:12.130 "trtype": "$TEST_TRANSPORT", 00:21:12.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.130 "adrfam": "ipv4", 00:21:12.130 "trsvcid": "$NVMF_PORT", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.130 "hdgst": ${hdgst:-false}, 00:21:12.130 "ddgst": ${ddgst:-false} 00:21:12.130 }, 00:21:12.130 "method": "bdev_nvme_attach_controller" 00:21:12.130 } 00:21:12.130 EOF 00:21:12.130 )") 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:12.130 { 00:21:12.130 "params": { 00:21:12.130 "name": "Nvme$subsystem", 00:21:12.130 "trtype": "$TEST_TRANSPORT", 00:21:12.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.130 "adrfam": "ipv4", 00:21:12.130 "trsvcid": "$NVMF_PORT", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.130 "hdgst": ${hdgst:-false}, 00:21:12.130 "ddgst": ${ddgst:-false} 00:21:12.130 }, 00:21:12.130 "method": "bdev_nvme_attach_controller" 00:21:12.130 } 00:21:12.130 EOF 00:21:12.130 )") 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:12.130 { 00:21:12.130 "params": { 00:21:12.130 "name": "Nvme$subsystem", 00:21:12.130 "trtype": "$TEST_TRANSPORT", 00:21:12.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.130 "adrfam": "ipv4", 00:21:12.130 "trsvcid": "$NVMF_PORT", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.130 "hdgst": ${hdgst:-false}, 00:21:12.130 "ddgst": ${ddgst:-false} 00:21:12.130 }, 00:21:12.130 "method": "bdev_nvme_attach_controller" 00:21:12.130 } 00:21:12.130 EOF 00:21:12.130 )") 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:12.130 { 00:21:12.130 "params": { 00:21:12.130 "name": "Nvme$subsystem", 00:21:12.130 "trtype": "$TEST_TRANSPORT", 00:21:12.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.130 "adrfam": "ipv4", 00:21:12.130 "trsvcid": "$NVMF_PORT", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.130 "hdgst": ${hdgst:-false}, 00:21:12.130 "ddgst": ${ddgst:-false} 00:21:12.130 }, 00:21:12.130 "method": "bdev_nvme_attach_controller" 00:21:12.130 } 00:21:12.130 EOF 00:21:12.130 )") 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:12.130 [2024-11-27 08:04:06.175380] Starting SPDK 
v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:21:12.130 [2024-11-27 08:04:06.175428] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:12.130 { 00:21:12.130 "params": { 00:21:12.130 "name": "Nvme$subsystem", 00:21:12.130 "trtype": "$TEST_TRANSPORT", 00:21:12.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.130 "adrfam": "ipv4", 00:21:12.130 "trsvcid": "$NVMF_PORT", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.130 "hdgst": ${hdgst:-false}, 00:21:12.130 "ddgst": ${ddgst:-false} 00:21:12.130 }, 00:21:12.130 "method": "bdev_nvme_attach_controller" 00:21:12.130 } 00:21:12.130 EOF 00:21:12.130 )") 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:12.130 { 00:21:12.130 "params": { 00:21:12.130 "name": "Nvme$subsystem", 00:21:12.130 "trtype": "$TEST_TRANSPORT", 00:21:12.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.130 "adrfam": "ipv4", 00:21:12.130 "trsvcid": "$NVMF_PORT", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.130 "hdgst": ${hdgst:-false}, 00:21:12.130 "ddgst": ${ddgst:-false} 00:21:12.130 }, 00:21:12.130 "method": "bdev_nvme_attach_controller" 00:21:12.130 } 00:21:12.130 EOF 00:21:12.130 )") 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:12.130 { 00:21:12.130 "params": { 00:21:12.130 "name": "Nvme$subsystem", 00:21:12.130 "trtype": "$TEST_TRANSPORT", 00:21:12.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.130 "adrfam": "ipv4", 00:21:12.130 "trsvcid": "$NVMF_PORT", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.130 "hdgst": ${hdgst:-false}, 00:21:12.130 "ddgst": ${ddgst:-false} 00:21:12.130 }, 00:21:12.130 "method": "bdev_nvme_attach_controller" 00:21:12.130 } 00:21:12.130 EOF 00:21:12.130 )") 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:12.130 08:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:12.130 "params": { 00:21:12.130 "name": "Nvme1", 00:21:12.130 "trtype": "tcp", 00:21:12.130 "traddr": "10.0.0.2", 00:21:12.130 "adrfam": "ipv4", 00:21:12.130 "trsvcid": "4420", 00:21:12.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.131 "hdgst": false, 00:21:12.131 "ddgst": false 00:21:12.131 }, 00:21:12.131 "method": "bdev_nvme_attach_controller" 00:21:12.131 },{ 00:21:12.131 "params": { 00:21:12.131 "name": "Nvme2", 00:21:12.131 "trtype": "tcp", 00:21:12.131 "traddr": "10.0.0.2", 00:21:12.131 "adrfam": "ipv4", 00:21:12.131 "trsvcid": "4420", 00:21:12.131 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:12.131 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:12.131 "hdgst": false, 00:21:12.131 "ddgst": false 00:21:12.131 }, 00:21:12.131 "method": "bdev_nvme_attach_controller" 00:21:12.131 },{ 00:21:12.131 "params": { 00:21:12.131 "name": "Nvme3", 00:21:12.131 "trtype": "tcp", 00:21:12.131 "traddr": "10.0.0.2", 00:21:12.131 "adrfam": "ipv4", 00:21:12.131 "trsvcid": "4420", 00:21:12.131 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:12.131 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:12.131 "hdgst": false, 00:21:12.131 "ddgst": false 00:21:12.131 }, 00:21:12.131 "method": "bdev_nvme_attach_controller" 00:21:12.131 },{ 00:21:12.131 "params": { 00:21:12.131 "name": "Nvme4", 00:21:12.131 "trtype": "tcp", 00:21:12.131 "traddr": "10.0.0.2", 00:21:12.131 "adrfam": "ipv4", 00:21:12.131 "trsvcid": "4420", 00:21:12.131 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:12.131 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:12.131 "hdgst": false, 00:21:12.131 "ddgst": false 00:21:12.131 }, 00:21:12.131 "method": "bdev_nvme_attach_controller" 00:21:12.131 },{ 00:21:12.131 "params": { 00:21:12.131 "name": "Nvme5", 00:21:12.131 "trtype": "tcp", 00:21:12.131 "traddr": "10.0.0.2", 00:21:12.131 "adrfam": "ipv4", 00:21:12.131 "trsvcid": "4420", 00:21:12.131 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:12.131 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:12.131 "hdgst": false, 00:21:12.131 "ddgst": false 00:21:12.131 }, 00:21:12.131 "method": "bdev_nvme_attach_controller" 00:21:12.131 },{ 00:21:12.131 "params": { 00:21:12.131 "name": "Nvme6", 00:21:12.131 "trtype": "tcp", 00:21:12.131 "traddr": "10.0.0.2", 00:21:12.131 "adrfam": "ipv4", 00:21:12.131 "trsvcid": "4420", 00:21:12.131 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:12.131 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:12.131 "hdgst": false, 00:21:12.131 "ddgst": false 00:21:12.131 }, 00:21:12.131 "method": "bdev_nvme_attach_controller" 00:21:12.131 },{ 00:21:12.131 "params": { 00:21:12.131 "name": "Nvme7", 00:21:12.131 "trtype": "tcp", 00:21:12.131 "traddr": "10.0.0.2", 00:21:12.131 "adrfam": "ipv4", 00:21:12.131 "trsvcid": "4420", 00:21:12.131 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:12.131 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:12.131 "hdgst": false, 00:21:12.131 "ddgst": false 00:21:12.131 }, 00:21:12.131 "method": "bdev_nvme_attach_controller" 00:21:12.131 },{ 00:21:12.131 "params": { 00:21:12.131 "name": "Nvme8", 00:21:12.131 "trtype": "tcp", 00:21:12.131 "traddr": "10.0.0.2", 00:21:12.131 "adrfam": "ipv4", 00:21:12.131 "trsvcid": "4420", 00:21:12.131 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:12.131 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:12.131 "hdgst": false, 00:21:12.131 "ddgst": false 00:21:12.131 }, 00:21:12.131 "method": "bdev_nvme_attach_controller" 00:21:12.131 },{ 00:21:12.131 "params": { 00:21:12.131 "name": "Nvme9", 00:21:12.131 "trtype": "tcp", 00:21:12.131 "traddr": "10.0.0.2", 00:21:12.131 "adrfam": "ipv4", 00:21:12.131 "trsvcid": "4420", 00:21:12.131 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:12.131 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:12.131 "hdgst": false, 00:21:12.131 "ddgst": false 00:21:12.131 }, 00:21:12.131 "method": "bdev_nvme_attach_controller" 00:21:12.131 },{ 00:21:12.131 "params": { 00:21:12.131 "name": "Nvme10", 00:21:12.131 "trtype": "tcp", 00:21:12.131 "traddr": "10.0.0.2", 00:21:12.131 "adrfam": "ipv4", 00:21:12.131 "trsvcid": "4420", 00:21:12.131 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:12.131 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:12.131 "hdgst": false, 00:21:12.131 "ddgst": false 00:21:12.131 }, 00:21:12.131 "method": "bdev_nvme_attach_controller" 00:21:12.131 }' 00:21:12.390 [2024-11-27 08:04:06.240374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.390 [2024-11-27 08:04:06.282114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.290 08:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.290 08:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:14.290 08:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:14.290 08:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.290 08:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:14.290 08:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.290 08:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2505138 00:21:14.290 08:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:14.290 08:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:15.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2505138 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:15.225 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2504925 00:21:15.225 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:15.225 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:15.225 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:15.225 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:15.225 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:15.225 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.225 { 00:21:15.225 "params": { 00:21:15.225 "name": "Nvme$subsystem", 00:21:15.225 "trtype": "$TEST_TRANSPORT", 00:21:15.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.225 "adrfam": "ipv4", 00:21:15.225 "trsvcid": "$NVMF_PORT", 00:21:15.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.225 "hdgst": ${hdgst:-false}, 00:21:15.225 "ddgst": ${ddgst:-false} 00:21:15.225 }, 00:21:15.226 "method": "bdev_nvme_attach_controller" 00:21:15.226 } 00:21:15.226 EOF 00:21:15.226 )") 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.226 { 00:21:15.226 "params": { 00:21:15.226 "name": "Nvme$subsystem", 00:21:15.226 "trtype": "$TEST_TRANSPORT", 00:21:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.226 "adrfam": "ipv4", 00:21:15.226 "trsvcid": "$NVMF_PORT", 00:21:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.226 "hdgst": ${hdgst:-false}, 00:21:15.226 "ddgst": ${ddgst:-false} 00:21:15.226 }, 00:21:15.226 "method": "bdev_nvme_attach_controller" 00:21:15.226 } 00:21:15.226 EOF 00:21:15.226 )") 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.226 { 00:21:15.226 "params": { 00:21:15.226 "name": "Nvme$subsystem", 00:21:15.226 "trtype": "$TEST_TRANSPORT", 00:21:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.226 "adrfam": "ipv4", 00:21:15.226 "trsvcid": "$NVMF_PORT", 00:21:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.226 "hdgst": ${hdgst:-false}, 00:21:15.226 "ddgst": ${ddgst:-false} 00:21:15.226 }, 00:21:15.226 "method": "bdev_nvme_attach_controller" 00:21:15.226 } 00:21:15.226 EOF 00:21:15.226 )") 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.226 { 00:21:15.226 "params": { 00:21:15.226 "name": "Nvme$subsystem", 00:21:15.226 "trtype": "$TEST_TRANSPORT", 00:21:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.226 "adrfam": "ipv4", 00:21:15.226 "trsvcid": "$NVMF_PORT", 00:21:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.226 "hdgst": ${hdgst:-false}, 00:21:15.226 "ddgst": ${ddgst:-false} 00:21:15.226 }, 00:21:15.226 "method": "bdev_nvme_attach_controller" 00:21:15.226 } 00:21:15.226 EOF 00:21:15.226 )") 00:21:15.226 08:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.226 { 00:21:15.226 "params": { 00:21:15.226 "name": "Nvme$subsystem", 00:21:15.226 "trtype": "$TEST_TRANSPORT", 00:21:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.226 "adrfam": "ipv4", 00:21:15.226 "trsvcid": "$NVMF_PORT", 00:21:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.226 "hdgst": ${hdgst:-false}, 00:21:15.226 "ddgst": ${ddgst:-false} 00:21:15.226 }, 00:21:15.226 "method": "bdev_nvme_attach_controller" 00:21:15.226 } 00:21:15.226 EOF 00:21:15.226 )") 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.226 { 00:21:15.226 "params": { 00:21:15.226 "name": "Nvme$subsystem", 00:21:15.226 "trtype": "$TEST_TRANSPORT", 00:21:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.226 "adrfam": "ipv4", 00:21:15.226 "trsvcid": "$NVMF_PORT", 00:21:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.226 "hdgst": ${hdgst:-false}, 00:21:15.226 "ddgst": ${ddgst:-false} 00:21:15.226 }, 00:21:15.226 "method": "bdev_nvme_attach_controller" 00:21:15.226 } 00:21:15.226 EOF 00:21:15.226 )") 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.226 { 00:21:15.226 "params": { 00:21:15.226 "name": "Nvme$subsystem", 00:21:15.226 "trtype": "$TEST_TRANSPORT", 00:21:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.226 "adrfam": "ipv4", 00:21:15.226 "trsvcid": "$NVMF_PORT", 00:21:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.226 "hdgst": ${hdgst:-false}, 00:21:15.226 "ddgst": ${ddgst:-false} 00:21:15.226 }, 00:21:15.226 "method": "bdev_nvme_attach_controller" 00:21:15.226 } 00:21:15.226 EOF 00:21:15.226 )") 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:15.226 [2024-11-27 08:04:09.107686] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:21:15.226 [2024-11-27 08:04:09.107733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505631 ] 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.226 { 00:21:15.226 "params": { 00:21:15.226 "name": "Nvme$subsystem", 00:21:15.226 "trtype": "$TEST_TRANSPORT", 00:21:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.226 "adrfam": "ipv4", 00:21:15.226 "trsvcid": "$NVMF_PORT", 00:21:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.226 "hdgst": ${hdgst:-false}, 00:21:15.226 "ddgst": ${ddgst:-false} 00:21:15.226 }, 00:21:15.226 "method": "bdev_nvme_attach_controller" 00:21:15.226 } 00:21:15.226 EOF 00:21:15.226 )") 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.226 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.226 { 00:21:15.226 "params": { 00:21:15.226 "name": "Nvme$subsystem", 00:21:15.226 "trtype": "$TEST_TRANSPORT", 00:21:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.226 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "$NVMF_PORT", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.227 "hdgst": ${hdgst:-false}, 00:21:15.227 "ddgst": ${ddgst:-false} 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 } 00:21:15.227 EOF 00:21:15.227 )") 00:21:15.227 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:15.227 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.227 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.227 { 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme$subsystem", 00:21:15.227 "trtype": "$TEST_TRANSPORT", 00:21:15.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "$NVMF_PORT", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.227 "hdgst": ${hdgst:-false}, 00:21:15.227 "ddgst": ${ddgst:-false} 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 } 00:21:15.227 EOF 00:21:15.227 )") 00:21:15.227 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:15.227 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:15.227 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:15.227 08:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme1", 00:21:15.227 "trtype": "tcp", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "4420", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.227 "hdgst": false, 00:21:15.227 "ddgst": false 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 },{ 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme2", 00:21:15.227 "trtype": "tcp", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "4420", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:15.227 "hdgst": false, 00:21:15.227 "ddgst": false 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 },{ 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme3", 00:21:15.227 "trtype": "tcp", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "4420", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:15.227 "hdgst": false, 00:21:15.227 "ddgst": false 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 },{ 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme4", 00:21:15.227 "trtype": "tcp", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "4420", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:15.227 "hdgst": false, 00:21:15.227 "ddgst": false 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 },{ 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme5", 00:21:15.227 "trtype": "tcp", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "4420", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:15.227 "hdgst": false, 00:21:15.227 "ddgst": false 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 },{ 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme6", 00:21:15.227 "trtype": "tcp", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "4420", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:15.227 "hdgst": false, 00:21:15.227 "ddgst": false 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 },{ 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme7", 00:21:15.227 "trtype": "tcp", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "4420", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:15.227 "hdgst": false, 00:21:15.227 "ddgst": false 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 },{ 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme8", 00:21:15.227 "trtype": "tcp", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "4420", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:15.227 "hdgst": false, 00:21:15.227 "ddgst": false 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 },{ 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme9", 00:21:15.227 "trtype": "tcp", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "4420", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:15.227 "hdgst": false, 00:21:15.227 "ddgst": false 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 },{ 00:21:15.227 "params": { 00:21:15.227 "name": "Nvme10", 00:21:15.227 "trtype": "tcp", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "adrfam": "ipv4", 00:21:15.227 "trsvcid": "4420", 00:21:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:15.227 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:15.227 "hdgst": false, 00:21:15.227 "ddgst": false 00:21:15.227 }, 00:21:15.227 "method": "bdev_nvme_attach_controller" 00:21:15.227 }' 00:21:15.227 [2024-11-27 08:04:09.171787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.227 [2024-11-27 08:04:09.213749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.604 Running I/O for 1 seconds... 00:21:17.798 2185.00 IOPS, 136.56 MiB/s 00:21:17.798 Latency(us) 00:21:17.798 [2024-11-27T07:04:11.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.798 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.798 Verification LBA range: start 0x0 length 0x400 00:21:17.798 Nvme1n1 : 1.14 279.62 17.48 0.00 0.00 224272.21 16412.49 219745.06 00:21:17.798 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.798 Verification LBA range: start 0x0 length 0x400 00:21:17.798 Nvme2n1 : 1.03 248.00 15.50 0.00 0.00 251576.77 17210.32 237069.36 00:21:17.798 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.798 Verification LBA range: start 0x0 length 0x400 00:21:17.798 Nvme3n1 : 1.14 281.13 17.57 0.00 0.00 219274.73 14930.81 217921.45 00:21:17.798 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.798 Verification LBA range: start 0x0 length 0x400 00:21:17.799 Nvme4n1 : 1.12 293.44 18.34 0.00 0.00 199175.04 8605.16 206067.98 00:21:17.799 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.799 Verification LBA range: start 0x0 length 0x400 00:21:17.799 Nvme5n1 : 1.15 277.07 17.32 0.00 0.00 214913.20 17780.20 223392.28 00:21:17.799 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.799 Verification LBA range: start 0x0 length 0x400 00:21:17.799 Nvme6n1 : 1.13 226.21 14.14 0.00 0.00 260593.98 19717.79 230686.72 00:21:17.799 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.799 Verification LBA range: start 0x0 length 0x400 00:21:17.799 Nvme7n1 : 1.15 277.29 17.33 0.00 0.00 209735.59 14303.94 226127.69 00:21:17.799 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.799 Verification LBA range: start 0x0 length 0x400 00:21:17.799 Nvme8n1 : 1.15 279.43 17.46 0.00 0.00 204439.37 16298.52 220656.86 00:21:17.799 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.799 Verification LBA range: start 0x0 length 0x400 00:21:17.799 Nvme9n1 : 1.16 275.53 17.22 0.00 0.00 204758.86 18008.15 246187.41 00:21:17.799 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:21:17.799 Verification LBA range: start 0x0 length 0x400 00:21:17.799 Nvme10n1 : 1.16 276.00 17.25 0.00 0.00 201318.58 13392.14 227951.30 00:21:17.799 [2024-11-27T07:04:11.908Z] =================================================================================================================== 00:21:17.799 [2024-11-27T07:04:11.908Z] Total : 2713.73 169.61 0.00 0.00 217401.52 8605.16 246187.41 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.799 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.799 rmmod nvme_tcp 00:21:17.799 rmmod nvme_fabrics 00:21:17.799 rmmod nvme_keyring 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2504925 ']' 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2504925 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2504925 ']' 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2504925 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2504925 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:18.058 08:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2504925' 00:21:18.058 killing process with pid 2504925 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2504925 00:21:18.058 08:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2504925 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.317 08:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:20.853 00:21:20.853 real 0m14.680s 00:21:20.853 user 0m33.198s 00:21:20.853 sys 0m5.516s 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:20.853 ************************************ 00:21:20.853 END TEST nvmf_shutdown_tc1 00:21:20.853 ************************************ 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:20.853 ************************************ 00:21:20.853 START TEST nvmf_shutdown_tc2 00:21:20.853 ************************************ 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:20.853 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:20.854 08:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:20.854 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.854 08:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:20.854 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:20.854 Found net devices under 0000:86:00.0: cvl_0_0 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.854 08:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:20.854 Found net devices under 0000:86:00.1: cvl_0_1 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.854 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:20.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:21:20.855 00:21:20.855 --- 10.0.0.2 ping statistics --- 00:21:20.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.855 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:20.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:20.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:21:20.855 00:21:20.855 --- 10.0.0.1 ping statistics --- 00:21:20.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.855 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2506648 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2506648 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2506648 ']' 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:20.855 08:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:20.855 [2024-11-27 08:04:14.797607] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:21:20.855 [2024-11-27 08:04:14.797651] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.855 [2024-11-27 08:04:14.863735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.855 [2024-11-27 08:04:14.906491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.855 [2024-11-27 08:04:14.906530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.855 [2024-11-27 08:04:14.906537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.855 [2024-11-27 08:04:14.906543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.855 [2024-11-27 08:04:14.906549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:20.855 [2024-11-27 08:04:14.908245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.855 [2024-11-27 08:04:14.908322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.855 [2024-11-27 08:04:14.908443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.855 [2024-11-27 08:04:14.908444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.115 [2024-11-27 08:04:15.047091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.115 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.115 Malloc1 00:21:21.115 [2024-11-27 08:04:15.159090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.115 Malloc2 00:21:21.115 Malloc3 00:21:21.374 Malloc4 00:21:21.374 Malloc5 00:21:21.374 Malloc6 00:21:21.374 Malloc7 00:21:21.374 Malloc8 00:21:21.634 Malloc9 00:21:21.634 Malloc10 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2506922 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2506922 /var/tmp/bdevperf.sock 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2506922 ']' 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.634 08:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.634 { 00:21:21.634 "params": { 00:21:21.634 "name": "Nvme$subsystem", 00:21:21.634 "trtype": "$TEST_TRANSPORT", 00:21:21.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.634 "adrfam": "ipv4", 00:21:21.634 "trsvcid": "$NVMF_PORT", 00:21:21.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.634 "hdgst": ${hdgst:-false}, 00:21:21.634 "ddgst": ${ddgst:-false} 00:21:21.634 }, 00:21:21.634 "method": "bdev_nvme_attach_controller" 00:21:21.634 } 00:21:21.634 EOF 00:21:21.634 )") 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.634 { 00:21:21.634 "params": { 00:21:21.634 "name": "Nvme$subsystem", 00:21:21.634 "trtype": "$TEST_TRANSPORT", 00:21:21.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.634 "adrfam": "ipv4", 00:21:21.634 "trsvcid": "$NVMF_PORT", 00:21:21.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.634 "hdgst": ${hdgst:-false}, 00:21:21.634 "ddgst": ${ddgst:-false} 00:21:21.634 }, 00:21:21.634 "method": "bdev_nvme_attach_controller" 00:21:21.634 } 00:21:21.634 EOF 00:21:21.634 )") 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.634 { 00:21:21.634 "params": { 00:21:21.634 
"name": "Nvme$subsystem", 00:21:21.634 "trtype": "$TEST_TRANSPORT", 00:21:21.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.634 "adrfam": "ipv4", 00:21:21.634 "trsvcid": "$NVMF_PORT", 00:21:21.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.634 "hdgst": ${hdgst:-false}, 00:21:21.634 "ddgst": ${ddgst:-false} 00:21:21.634 }, 00:21:21.634 "method": "bdev_nvme_attach_controller" 00:21:21.634 } 00:21:21.634 EOF 00:21:21.634 )") 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.634 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.634 { 00:21:21.634 "params": { 00:21:21.634 "name": "Nvme$subsystem", 00:21:21.635 "trtype": "$TEST_TRANSPORT", 00:21:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "$NVMF_PORT", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.635 "hdgst": ${hdgst:-false}, 00:21:21.635 "ddgst": ${ddgst:-false} 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 } 00:21:21.635 EOF 00:21:21.635 )") 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.635 { 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme$subsystem", 00:21:21.635 "trtype": "$TEST_TRANSPORT", 00:21:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "$NVMF_PORT", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.635 "hdgst": ${hdgst:-false}, 00:21:21.635 "ddgst": ${ddgst:-false} 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 } 00:21:21.635 EOF 00:21:21.635 )") 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.635 { 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme$subsystem", 00:21:21.635 "trtype": "$TEST_TRANSPORT", 00:21:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "$NVMF_PORT", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.635 "hdgst": ${hdgst:-false}, 00:21:21.635 "ddgst": ${ddgst:-false} 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 } 00:21:21.635 EOF 00:21:21.635 )") 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.635 { 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme$subsystem", 00:21:21.635 "trtype": "$TEST_TRANSPORT", 00:21:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "$NVMF_PORT", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.635 "hdgst": ${hdgst:-false}, 00:21:21.635 "ddgst": ${ddgst:-false} 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 } 00:21:21.635 EOF 00:21:21.635 )") 00:21:21.635 [2024-11-27 08:04:15.634313] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:21:21.635 [2024-11-27 08:04:15.634361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506922 ] 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.635 { 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme$subsystem", 00:21:21.635 "trtype": "$TEST_TRANSPORT", 00:21:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "$NVMF_PORT", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.635 "hdgst": ${hdgst:-false}, 00:21:21.635 "ddgst": ${ddgst:-false} 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 } 00:21:21.635 EOF 00:21:21.635 )") 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.635 { 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme$subsystem", 00:21:21.635 "trtype": "$TEST_TRANSPORT", 00:21:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "$NVMF_PORT", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.635 "hdgst": ${hdgst:-false}, 00:21:21.635 "ddgst": ${ddgst:-false} 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 } 00:21:21.635 EOF 00:21:21.635 )") 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.635 { 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme$subsystem", 00:21:21.635 "trtype": "$TEST_TRANSPORT", 00:21:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.635 
"adrfam": "ipv4", 00:21:21.635 "trsvcid": "$NVMF_PORT", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.635 "hdgst": ${hdgst:-false}, 00:21:21.635 "ddgst": ${ddgst:-false} 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 } 00:21:21.635 EOF 00:21:21.635 )") 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:21.635 08:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme1", 00:21:21.635 "trtype": "tcp", 00:21:21.635 "traddr": "10.0.0.2", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "4420", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.635 "hdgst": false, 00:21:21.635 "ddgst": false 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 },{ 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme2", 00:21:21.635 "trtype": "tcp", 00:21:21.635 "traddr": "10.0.0.2", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "4420", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:21.635 "hdgst": false, 00:21:21.635 "ddgst": false 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 },{ 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme3", 00:21:21.635 "trtype": "tcp", 00:21:21.635 "traddr": "10.0.0.2", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "4420", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:21.635 "hdgst": false, 00:21:21.635 "ddgst": false 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 },{ 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme4", 00:21:21.635 "trtype": "tcp", 00:21:21.635 "traddr": "10.0.0.2", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "4420", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:21.635 "hdgst": false, 00:21:21.635 "ddgst": false 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 },{ 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme5", 00:21:21.635 "trtype": "tcp", 00:21:21.635 "traddr": "10.0.0.2", 00:21:21.635 "adrfam": "ipv4", 00:21:21.635 "trsvcid": "4420", 00:21:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:21.635 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:21.635 "hdgst": false, 00:21:21.635 "ddgst": false 00:21:21.635 }, 00:21:21.635 "method": "bdev_nvme_attach_controller" 00:21:21.635 },{ 00:21:21.635 "params": { 00:21:21.635 "name": "Nvme6", 00:21:21.635 "trtype": "tcp", 00:21:21.636 "traddr": "10.0.0.2", 00:21:21.636 "adrfam": "ipv4", 00:21:21.636 "trsvcid": "4420", 00:21:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:21.636 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:21.636 "hdgst": false, 00:21:21.636 "ddgst": false 00:21:21.636 }, 00:21:21.636 "method": "bdev_nvme_attach_controller" 00:21:21.636 },{ 00:21:21.636 "params": { 00:21:21.636 "name": "Nvme7", 00:21:21.636 "trtype": "tcp", 00:21:21.636 "traddr": "10.0.0.2", 
00:21:21.636 "adrfam": "ipv4", 00:21:21.636 "trsvcid": "4420", 00:21:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:21.636 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:21.636 "hdgst": false, 00:21:21.636 "ddgst": false 00:21:21.636 }, 00:21:21.636 "method": "bdev_nvme_attach_controller" 00:21:21.636 },{ 00:21:21.636 "params": { 00:21:21.636 "name": "Nvme8", 00:21:21.636 "trtype": "tcp", 00:21:21.636 "traddr": "10.0.0.2", 00:21:21.636 "adrfam": "ipv4", 00:21:21.636 "trsvcid": "4420", 00:21:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:21.636 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:21.636 "hdgst": false, 00:21:21.636 "ddgst": false 00:21:21.636 }, 00:21:21.636 "method": "bdev_nvme_attach_controller" 00:21:21.636 },{ 00:21:21.636 "params": { 00:21:21.636 "name": "Nvme9", 00:21:21.636 "trtype": "tcp", 00:21:21.636 "traddr": "10.0.0.2", 00:21:21.636 "adrfam": "ipv4", 00:21:21.636 "trsvcid": "4420", 00:21:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:21.636 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:21.636 "hdgst": false, 00:21:21.636 "ddgst": false 00:21:21.636 }, 00:21:21.636 "method": "bdev_nvme_attach_controller" 00:21:21.636 },{ 00:21:21.636 "params": { 00:21:21.636 "name": "Nvme10", 00:21:21.636 "trtype": "tcp", 00:21:21.636 "traddr": "10.0.0.2", 00:21:21.636 "adrfam": "ipv4", 00:21:21.636 "trsvcid": "4420", 00:21:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:21.636 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:21.636 "hdgst": false, 00:21:21.636 "ddgst": false 00:21:21.636 }, 00:21:21.636 "method": "bdev_nvme_attach_controller" 00:21:21.636 }' 00:21:21.636 [2024-11-27 08:04:15.697330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.636 [2024-11-27 08:04:15.738941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.539 Running I/O for 10 seconds... 
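Annotation: the bdevperf job producing the I/O above is driven entirely by the generated JSON echoed in the trace: one bdev_nvme_attach_controller entry per subsystem, with the $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP placeholders already substituted to tcp / 10.0.0.2. A hedged reconstruction of the file bdevperf consumes is sketched below; the outer "subsystems" wrapper follows the standard SPDK JSON config layout (the exact text comes from gen_nvmf_target_json, which streams it over /dev/fd/63 rather than a temp file), and only the first of the ten controller entries is shown:

    # Write a minimal version of the generated config (Nvme1 only; the real file
    # repeats the entry for cnode1..cnode10 with matching host NQNs).
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same parameters as the trace: 64 outstanding I/Os of 64 KiB (65536 bytes),
    # verify workload, 10 second runtime, RPC socket at /var/tmp/bdevperf.sock.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10

The waitforio loop that follows simply polls bdev_get_iostat on /var/tmp/bdevperf.sock and extracts .bdevs[0].num_read_ops with jq until it reaches 100, which is the 3 -> 131 progression visible in the next trace lines after the 0.25 s sleep.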
00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:23.539 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.798 08:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2506922 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2506922 ']' 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2506922 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.798 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506922 00:21:24.057 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.057 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.057 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506922' 00:21:24.057 killing process with pid 2506922 00:21:24.057 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2506922 00:21:24.057 08:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2506922 00:21:24.057 Received shutdown signal, test time was about 0.651720 seconds 00:21:24.057 00:21:24.057 Latency(us) 00:21:24.057 [2024-11-27T07:04:18.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.057 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.057 Verification LBA range: start 0x0 length 0x400 00:21:24.057 Nvme1n1 : 0.63 303.96 19.00 0.00 0.00 206449.38 24618.74 193302.71 00:21:24.057 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.057 Verification LBA range: start 0x0 length 0x400 00:21:24.057 Nvme2n1 : 0.64 301.18 18.82 0.00 0.00 203126.28 15956.59 199685.34 00:21:24.057 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.057 Verification LBA range: start 0x0 length 0x400 00:21:24.057 Nvme3n1 : 0.63 303.07 18.94 0.00 0.00 196890.56 34648.60 187831.87 00:21:24.057 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.057 Verification LBA range: start 0x0 length 0x400 00:21:24.057 Nvme4n1 : 0.64 299.94 18.75 0.00 0.00 194295.54 15044.79 
220656.86 00:21:24.057 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.057 Verification LBA range: start 0x0 length 0x400 00:21:24.057 Nvme5n1 : 0.65 294.91 18.43 0.00 0.00 192658.62 16640.45 218833.25 00:21:24.057 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.057 Verification LBA range: start 0x0 length 0x400 00:21:24.057 Nvme6n1 : 0.65 296.69 18.54 0.00 0.00 185519.12 19261.89 212450.62 00:21:24.057 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.057 Verification LBA range: start 0x0 length 0x400 00:21:24.057 Nvme7n1 : 0.65 297.05 18.57 0.00 0.00 180478.81 14531.90 217921.45 00:21:24.057 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.057 Verification LBA range: start 0x0 length 0x400 00:21:24.057 Nvme8n1 : 0.61 209.43 13.09 0.00 0.00 245304.77 14189.97 215186.03 00:21:24.057 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.057 Verification LBA range: start 0x0 length 0x400 00:21:24.057 Nvme9n1 : 0.63 202.91 12.68 0.00 0.00 245674.07 18464.06 244363.80 00:21:24.057 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.057 Verification LBA range: start 0x0 length 0x400 00:21:24.057 Nvme10n1 : 0.62 206.33 12.90 0.00 0.00 234293.87 31913.18 224304.08 00:21:24.057 [2024-11-27T07:04:18.166Z] =================================================================================================================== 00:21:24.057 [2024-11-27T07:04:18.166Z] Total : 2715.46 169.72 0.00 0.00 204770.38 14189.97 244363.80 00:21:24.316 08:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2506648 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.258 rmmod nvme_tcp 00:21:25.258 rmmod nvme_fabrics 00:21:25.258 rmmod nvme_keyring 00:21:25.258 08:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2506648 ']' 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2506648 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2506648 ']' 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2506648 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506648 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506648' 00:21:25.258 killing process with pid 2506648 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2506648 00:21:25.258 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2506648 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.827 08:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.827 08:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.733 00:21:27.733 real 0m7.259s 00:21:27.733 user 0m21.446s 00:21:27.733 sys 0m1.252s 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.733 ************************************ 00:21:27.733 END TEST nvmf_shutdown_tc2 00:21:27.733 ************************************ 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:27.733 ************************************ 00:21:27.733 START TEST nvmf_shutdown_tc3 00:21:27.733 ************************************ 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.733 08:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:27.733 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.734 08:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:27.734 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:27.734 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.734 08:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:27.734 Found net devices under 0000:86:00.0: cvl_0_0 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:27.734 Found net devices under 0000:86:00.1: cvl_0_1 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.734 08:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.734 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.994 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.994 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.994 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.994 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.994 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.994 08:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:21:27.994 00:21:27.994 --- 10.0.0.2 ping statistics --- 00:21:27.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.994 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:21:27.994 00:21:27.994 --- 10.0.0.1 ping statistics --- 00:21:27.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.994 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2507969 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2507969 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2507969 ']' 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
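Annotation: from here the tc3 case repeats the bring-up just seen for tc2: nvmf_tgt is started inside the namespace (note the nested ip netns exec wrappers in the command line), waitforlisten blocks on /var/tmp/spdk.sock, and then the same TCP transport and ten-subsystem layout is recreated. A sketch of the equivalent manual RPC sequence is below; the transport line mirrors the flags in the trace, while the per-subsystem batch is reconstructed from the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notices earlier in the log, so the bdev size, block size, and serial numbers are illustrative (the exact batches come from target/shutdown.sh):

    # Transport first, then one subsystem per loop pass (ten in total).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 10); do
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i            # size/block illustrative
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done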
00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:27.994 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:28.253 [2024-11-27 08:04:22.150609] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:21:28.253 [2024-11-27 08:04:22.150654] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.253 [2024-11-27 08:04:22.217662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:28.253 [2024-11-27 08:04:22.260731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.253 [2024-11-27 08:04:22.260770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.253 [2024-11-27 08:04:22.260777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.253 [2024-11-27 08:04:22.260783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.253 [2024-11-27 08:04:22.260789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.253 [2024-11-27 08:04:22.262302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.253 [2024-11-27 08:04:22.262391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.253 [2024-11-27 08:04:22.262498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.253 [2024-11-27 08:04:22.262498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:28.253 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.253 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:28.253 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.253 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.253 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:28.512 [2024-11-27 08:04:22.401203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.512 08:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:28.512 
08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.512 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:28.512 Malloc1 00:21:28.512 [2024-11-27 08:04:22.518407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.512 Malloc2 00:21:28.512 Malloc3 00:21:28.772 Malloc4 00:21:28.772 Malloc5 00:21:28.772 Malloc6 00:21:28.772 Malloc7 00:21:28.772 Malloc8 00:21:28.772 Malloc9 00:21:29.031 Malloc10 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2508237 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2508237 /var/tmp/bdevperf.sock 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2508237 ']' 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
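The generated rpcs.txt itself is not echoed by the trace, but per subsystem the shutdown.sh loop above amounts to roughly the following scripts/rpc.py calls against the target's /var/tmp/spdk.sock. This is an illustrative sketch for cnode1/Malloc1 only; the malloc geometry and serial number below are assumptions, not values taken from this run:

# per-subsystem target setup, i = 1..10 (sketch)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # as issued at shutdown.sh@21 above
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                    # size/block size are placeholders
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf initiator then attaches to the matching nqn.2016-06.io.spdk:cnodeN subsystems through the gen_nvmf_target_json output printed below.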
00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.031 { 00:21:29.031 "params": { 00:21:29.031 "name": "Nvme$subsystem", 00:21:29.031 "trtype": "$TEST_TRANSPORT", 00:21:29.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.031 "adrfam": "ipv4", 00:21:29.031 "trsvcid": "$NVMF_PORT", 00:21:29.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.031 "hdgst": ${hdgst:-false}, 00:21:29.031 "ddgst": ${ddgst:-false} 00:21:29.031 }, 00:21:29.031 "method": "bdev_nvme_attach_controller" 00:21:29.031 } 00:21:29.031 EOF 00:21:29.031 )") 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.031 { 00:21:29.031 "params": { 00:21:29.031 "name": "Nvme$subsystem", 00:21:29.031 "trtype": "$TEST_TRANSPORT", 00:21:29.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.031 "adrfam": "ipv4", 00:21:29.031 "trsvcid": "$NVMF_PORT", 00:21:29.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.031 "hdgst": ${hdgst:-false}, 00:21:29.031 "ddgst": ${ddgst:-false} 00:21:29.031 }, 00:21:29.031 "method": "bdev_nvme_attach_controller" 00:21:29.031 } 00:21:29.031 EOF 00:21:29.031 )") 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.031 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.031 { 00:21:29.031 "params": { 00:21:29.031 "name": "Nvme$subsystem", 00:21:29.031 "trtype": "$TEST_TRANSPORT", 00:21:29.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.031 "adrfam": "ipv4", 00:21:29.031 "trsvcid": "$NVMF_PORT", 00:21:29.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.031 "hdgst": ${hdgst:-false}, 00:21:29.031 "ddgst": ${ddgst:-false} 00:21:29.031 }, 00:21:29.031 "method": "bdev_nvme_attach_controller" 00:21:29.031 } 00:21:29.032 EOF 00:21:29.032 )") 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:29.032 { 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme$subsystem", 00:21:29.032 "trtype": "$TEST_TRANSPORT", 00:21:29.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "$NVMF_PORT", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.032 "hdgst": ${hdgst:-false}, 00:21:29.032 "ddgst": ${ddgst:-false} 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 } 00:21:29.032 EOF 00:21:29.032 )") 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.032 { 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme$subsystem", 00:21:29.032 "trtype": "$TEST_TRANSPORT", 00:21:29.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "$NVMF_PORT", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.032 "hdgst": ${hdgst:-false}, 00:21:29.032 "ddgst": ${ddgst:-false} 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 } 00:21:29.032 EOF 00:21:29.032 )") 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.032 { 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme$subsystem", 00:21:29.032 "trtype": "$TEST_TRANSPORT", 00:21:29.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "$NVMF_PORT", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.032 "hdgst": ${hdgst:-false}, 00:21:29.032 "ddgst": ${ddgst:-false} 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 } 00:21:29.032 EOF 00:21:29.032 )") 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.032 { 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme$subsystem", 00:21:29.032 "trtype": "$TEST_TRANSPORT", 00:21:29.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "$NVMF_PORT", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.032 "hdgst": ${hdgst:-false}, 00:21:29.032 "ddgst": ${ddgst:-false} 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 } 00:21:29.032 EOF 00:21:29.032 )") 00:21:29.032 [2024-11-27 08:04:22.988144] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:21:29.032 [2024-11-27 08:04:22.988191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2508237 ] 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.032 { 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme$subsystem", 00:21:29.032 "trtype": "$TEST_TRANSPORT", 00:21:29.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "$NVMF_PORT", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.032 "hdgst": ${hdgst:-false}, 00:21:29.032 "ddgst": ${ddgst:-false} 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 } 00:21:29.032 EOF 00:21:29.032 )") 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.032 08:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.032 { 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme$subsystem", 00:21:29.032 "trtype": "$TEST_TRANSPORT", 00:21:29.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "$NVMF_PORT", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.032 "hdgst": ${hdgst:-false}, 00:21:29.032 "ddgst": ${ddgst:-false} 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 } 00:21:29.032 EOF 00:21:29.032 )") 00:21:29.032 08:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.032 08:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.032 08:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.032 { 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme$subsystem", 00:21:29.032 "trtype": "$TEST_TRANSPORT", 00:21:29.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "$NVMF_PORT", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.032 "hdgst": ${hdgst:-false}, 00:21:29.032 "ddgst": ${ddgst:-false} 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 } 00:21:29.032 EOF 00:21:29.032 )") 00:21:29.032 08:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.032 08:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:29.032 08:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:29.032 08:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme1", 00:21:29.032 "trtype": "tcp", 00:21:29.032 "traddr": "10.0.0.2", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "4420", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.032 "hdgst": false, 00:21:29.032 "ddgst": false 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 },{ 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme2", 00:21:29.032 "trtype": "tcp", 00:21:29.032 "traddr": "10.0.0.2", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "4420", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:29.032 "hdgst": false, 00:21:29.032 "ddgst": false 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 },{ 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme3", 00:21:29.032 "trtype": "tcp", 00:21:29.032 "traddr": "10.0.0.2", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "4420", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:29.032 "hdgst": false, 00:21:29.032 "ddgst": false 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 },{ 00:21:29.032 "params": { 00:21:29.032 "name": "Nvme4", 00:21:29.032 "trtype": "tcp", 00:21:29.032 "traddr": "10.0.0.2", 00:21:29.032 "adrfam": "ipv4", 00:21:29.032 "trsvcid": "4420", 00:21:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:29.032 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:29.032 "hdgst": false, 00:21:29.032 "ddgst": false 00:21:29.032 }, 00:21:29.032 "method": "bdev_nvme_attach_controller" 00:21:29.032 },{ 00:21:29.033 "params": { 00:21:29.033 "name": "Nvme5", 00:21:29.033 "trtype": "tcp", 00:21:29.033 "traddr": "10.0.0.2", 00:21:29.033 "adrfam": "ipv4", 00:21:29.033 "trsvcid": "4420", 00:21:29.033 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:29.033 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:29.033 "hdgst": false, 00:21:29.033 "ddgst": false 00:21:29.033 }, 00:21:29.033 "method": "bdev_nvme_attach_controller" 00:21:29.033 },{ 00:21:29.033 "params": { 00:21:29.033 "name": "Nvme6", 00:21:29.033 "trtype": "tcp", 00:21:29.033 "traddr": "10.0.0.2", 00:21:29.033 "adrfam": "ipv4", 00:21:29.033 "trsvcid": "4420", 00:21:29.033 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:29.033 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:29.033 "hdgst": false, 00:21:29.033 "ddgst": false 00:21:29.033 }, 00:21:29.033 "method": "bdev_nvme_attach_controller" 00:21:29.033 },{ 00:21:29.033 "params": { 00:21:29.033 "name": "Nvme7", 00:21:29.033 "trtype": "tcp", 00:21:29.033 "traddr": "10.0.0.2", 00:21:29.033 "adrfam": "ipv4", 00:21:29.033 "trsvcid": "4420", 00:21:29.033 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:29.033 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:29.033 "hdgst": false, 00:21:29.033 "ddgst": false 00:21:29.033 }, 00:21:29.033 "method": "bdev_nvme_attach_controller" 00:21:29.033 },{ 00:21:29.033 "params": { 00:21:29.033 "name": "Nvme8", 00:21:29.033 "trtype": "tcp", 00:21:29.033 "traddr": "10.0.0.2", 00:21:29.033 "adrfam": "ipv4", 00:21:29.033 "trsvcid": "4420", 00:21:29.033 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:29.033 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:29.033 "hdgst": false, 00:21:29.033 "ddgst": false 00:21:29.033 }, 00:21:29.033 "method": "bdev_nvme_attach_controller" 00:21:29.033 },{ 00:21:29.033 "params": { 00:21:29.033 "name": "Nvme9", 00:21:29.033 "trtype": "tcp", 00:21:29.033 "traddr": "10.0.0.2", 00:21:29.033 "adrfam": "ipv4", 00:21:29.033 "trsvcid": "4420", 00:21:29.033 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:29.033 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:29.033 "hdgst": false, 00:21:29.033 "ddgst": false 00:21:29.033 }, 00:21:29.033 "method": "bdev_nvme_attach_controller" 00:21:29.033 },{ 00:21:29.033 "params": { 00:21:29.033 "name": "Nvme10", 00:21:29.033 "trtype": "tcp", 00:21:29.033 "traddr": "10.0.0.2", 00:21:29.033 "adrfam": "ipv4", 00:21:29.033 "trsvcid": "4420", 00:21:29.033 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:29.033 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:29.033 "hdgst": false, 00:21:29.033 "ddgst": false 00:21:29.033 }, 00:21:29.033 "method": "bdev_nvme_attach_controller" 00:21:29.033 }' 00:21:29.033 [2024-11-27 08:04:23.054126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.033 [2024-11-27 08:04:23.095580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.940 Running I/O for 10 seconds... 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:30.940 08:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:30.940 08:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:31.199 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:31.199 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:31.199 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:31.199 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.199 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:31.199 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:31.199 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.199 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:31.199 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:31.200 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:31.200 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:31.200 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:31.200 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2507969 00:21:31.200 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2507969 ']' 00:21:31.200 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2507969 00:21:31.200 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:31.200 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.200 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507969 00:21:31.478 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.478 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.478 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 2507969' 00:21:31.478 killing process with pid 2507969 00:21:31.478 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2507969 00:21:31.478 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2507969 00:21:31.478 [2024-11-27 08:04:25.312245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.478 [2024-11-27 08:04:25.312412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312570] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.312706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138850 is same with the 
state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.313994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.314000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.314009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.314017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.314023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.314030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.479 [2024-11-27 08:04:25.314037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 
08:04:25.314071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.314195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b400 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.315327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138d20 is same with the state(6) to be set 00:21:31.480 [2024-11-27 08:04:25.315338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138d20 is same 
with the state(6) to be set
00:21:31.480 [2024-11-27 08:04:25.315345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138d20 is same with the state(6) to be set
...
00:21:31.481 [2024-11-27 08:04:25.315739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138d20 is same with the state(6) to be set
00:21:31.481 [2024-11-27 08:04:25.317193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21391f0 is same with the state(6) to be set
...
00:21:31.481 [2024-11-27 08:04:25.317626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21391f0 is same with the state(6) to be set
00:21:31.481 [2024-11-27 08:04:25.318662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21396e0 is same with the state(6) to be set
...
00:21:31.482 [2024-11-27 08:04:25.319108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21396e0 is same with the state(6) to be set
00:21:31.482 [2024-11-27 08:04:25.320247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213a0a0 is same with the state(6) to be set
...
00:21:31.483 [2024-11-27 08:04:25.320676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213a0a0 is same with the state(6) to be set
00:21:31.483 [2024-11-27 08:04:25.321710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213a570 is same with the state(6) to be set
...
00:21:31.483 [2024-11-27 08:04:25.321906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.483 [2024-11-27 08:04:25.321938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.483 [2024-11-27 08:04:25.321958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.483 [2024-11-27 08:04:25.321969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.483 [2024-11-27 08:04:25.321979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.321987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.321996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.322012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe25d80 is same with the state(6) to be set
00:21:31.484 [2024-11-27 08:04:25.322047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.322073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.322090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.322112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.322130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de610 is same with the state(6) to be set
00:21:31.484 [2024-11-27 08:04:25.322156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.322168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213a570 is same with the state(6) to be set
00:21:31.484 [2024-11-27 08:04:25.322174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.322189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.322203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.322216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9be200 is same with the state(6) to be set
00:21:31.484 [2024-11-27 08:04:25.322241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.484 [2024-11-27 08:04:25.322258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.484 [2024-11-27 08:04:25.322265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9300 is same with the state(6) to be set 00:21:31.484 [2024-11-27 08:04:25.322325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde85b0 is same with the state(6) to be set 00:21:31.484 [2024-11-27 08:04:25.322415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322462] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ca1c0 is same with the state(6) to be set 00:21:31.484 [2024-11-27 08:04:25.322498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.484 [2024-11-27 08:04:25.322550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.484 [2024-11-27 08:04:25.322556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9d30 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.322578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.485 [2024-11-27 08:04:25.322587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.322595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.485 [2024-11-27 08:04:25.322601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.322609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.485 [2024-11-27 08:04:25.322615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.322623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.485 [2024-11-27 08:04:25.322631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.322640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2770 is same with the 
state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.322891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.322909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.322923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.322930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.322939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.322953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.322963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.322971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.322979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.322973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213aa40 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.322987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.322995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213aa40 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.322997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213aa40 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.323006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213aa40 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.323016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.323298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485
[2024-11-27 08:04:25.323308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.323318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.323329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.323339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.485
[2024-11-27 08:04:25.323350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.323351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.323359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.323369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.485 [2024-11-27 08:04:25.323372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.485 [2024-11-27 08:04:25.323377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.485 [2024-11-27 08:04:25.323380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486
[2024-11-27 08:04:25.323386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486
[2024-11-27 08:04:25.323418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state
of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486
[2024-11-27 08:04:25.323483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486
[2024-11-27 08:04:25.323526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486
[2024-11-27 08:04:25.323560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.486 [2024-11-27 08:04:25.323572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:31.486 [2024-11-27 08:04:25.323829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.486 [2024-11-27 08:04:25.323875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.486 [2024-11-27 08:04:25.323884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.323890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.323901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.323908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.323916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.323923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.323932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.323939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.323952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.323960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.323969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.323976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.323984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 
[2024-11-27 08:04:25.323992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.325819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.325841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.325854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.325862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.325871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.325878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.325887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.325894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.325903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.325910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.325918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.325926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.325934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.325945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.325962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.325969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.325978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.325986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.325994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 
08:04:25.326001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.487 [2024-11-27 08:04:25.326299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.487 [2024-11-27 08:04:25.326305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.326321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.326337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.326357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.326372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.326389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.326406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.326422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.326439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.326455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.326471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.326479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.331362] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the 
state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.331566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213af10 is same with the state(6) to be set 00:21:31.488 [2024-11-27 08:04:25.339676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.339979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-11-27 08:04:25.339990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-11-27 08:04:25.340000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.340021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.340044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.340066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.340087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.340108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.340130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.340153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.340174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.340196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.340218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:31.489 [2024-11-27 08:04:25.340462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2770 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.340509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.489 [2024-11-27 
08:04:25.340521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.489 [2024-11-27 08:04:25.340543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.489 [2024-11-27 08:04:25.340567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.489 [2024-11-27 08:04:25.340587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37780 is same with the state(6) to be set 00:21:31.489 [2024-11-27 08:04:25.340613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe25d80 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.340637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8de610 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.340658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9be200 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.340678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9300 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.340698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde85b0 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.340732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.489 [2024-11-27 08:04:25.340744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.489 [2024-11-27 08:04:25.340765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.489 [2024-11-27 08:04:25.340784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.340795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.489 [2024-11-27 08:04:25.340805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:31.489 [2024-11-27 08:04:25.340814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1deb0 is same with the state(6) to be set 00:21:31.489 [2024-11-27 08:04:25.340836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ca1c0 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.340856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9d30 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.342677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:31.489 [2024-11-27 08:04:25.342715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37780 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.343302] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:31.489 [2024-11-27 08:04:25.343482] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:31.489 [2024-11-27 08:04:25.343600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.489 [2024-11-27 08:04:25.343621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2770 with addr=10.0.0.2, port=4420 00:21:31.489 [2024-11-27 08:04:25.343632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2770 is same with the state(6) to be set 00:21:31.489 [2024-11-27 08:04:25.343697] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:31.489 [2024-11-27 08:04:25.343752] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:31.489 [2024-11-27 08:04:25.343812] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:31.489 [2024-11-27 08:04:25.343865] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:31.489 [2024-11-27 08:04:25.343955] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:31.489 [2024-11-27 08:04:25.344429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.489 [2024-11-27 08:04:25.344453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe37780 with addr=10.0.0.2, port=4420 00:21:31.489 [2024-11-27 08:04:25.344465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37780 is same with the state(6) to be set 00:21:31.489 [2024-11-27 08:04:25.344480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2770 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.344623] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:31.489 [2024-11-27 08:04:25.344650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37780 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.344663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:31.489 [2024-11-27 08:04:25.344674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:31.489 [2024-11-27 08:04:25.344686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:21:31.489 [2024-11-27 08:04:25.344697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:31.489 [2024-11-27 08:04:25.344778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:31.489 [2024-11-27 08:04:25.344791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:31.489 [2024-11-27 08:04:25.344801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:31.489 [2024-11-27 08:04:25.344810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:31.489 [2024-11-27 08:04:25.350479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1deb0 (9): Bad file descriptor 00:21:31.489 [2024-11-27 08:04:25.350626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.350642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.350657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.350667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.350679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.350688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.350700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.350709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.350720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-11-27 08:04:25.350729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-11-27 08:04:25.350745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.350985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.350997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:31.490 [2024-11-27 08:04:25.351208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 
08:04:25.351409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-11-27 08:04:25.351550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-11-27 08:04:25.351560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351609] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.351922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.351931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeef340 is same with the state(6) to be set 00:21:31.491 [2024-11-27 08:04:25.353140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-11-27 08:04:25.353549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-11-27 08:04:25.353560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:31.492 [2024-11-27 08:04:25.353836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.353983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.353994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 
08:04:25.354041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354238] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-11-27 08:04:25.354269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-11-27 08:04:25.354278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.354289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.354297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.354307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.354315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.354326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.354335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.354345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.354354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.354365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.354374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.354385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.354393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.354404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.354412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.354423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.354432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.354441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbce030 is same with the state(6) to be set 00:21:31.493 [2024-11-27 08:04:25.355659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.355986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.355997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-11-27 08:04:25.356305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-11-27 08:04:25.356314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:31.494 [2024-11-27 08:04:25.356697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 
08:04:25.356897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.356981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.356993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcf1a0 is same with the state(6) to be set 00:21:31.494 [2024-11-27 08:04:25.358212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.358232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.358246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.358255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.358266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.358275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.358286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.358295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.358305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.358313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.358324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.494 [2024-11-27 08:04:25.358333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.494 [2024-11-27 08:04:25.358344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.358992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.358999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.359009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.359016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.359025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.359032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.359041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.359048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.359057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.359064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.359074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.359081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.495 [2024-11-27 08:04:25.359090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.495 [2024-11-27 08:04:25.359097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.359388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.359396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdce9d0 is same with the state(6) to be set 00:21:31.496 [2024-11-27 08:04:25.360398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360588] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-11-27 08:04:25.360669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-11-27 08:04:25.360678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.360989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.360997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:31.497 [2024-11-27 08:04:25.361263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-11-27 08:04:25.361334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-11-27 08:04:25.361343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.361350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.361359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.361367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.361376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.361384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.361393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.361400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.361409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.361421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 
08:04:25.361430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.361437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.361446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.361453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.361462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.361469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.361477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd0f30 is same with the state(6) to be set 00:21:31.498 [2024-11-27 08:04:25.362495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.362987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.362995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.363003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.363012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-11-27 08:04:25.363019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-11-27 08:04:25.363028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.363561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.363569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acb230 is same with the state(6) to be set 00:21:31.499 [2024-11-27 08:04:25.364591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.364605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.364617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.364625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.364634] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.364641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.364650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.364660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.364669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.364676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.364685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.364692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-11-27 08:04:25.364702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-11-27 08:04:25.364712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.364983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.364992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:31.500 [2024-11-27 08:04:25.365312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-11-27 08:04:25.365355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-11-27 08:04:25.365363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 
08:04:25.365477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.365577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.365585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee8800 is same with the state(6) to be set 00:21:31.501 [2024-11-27 08:04:25.366550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:31.501 [2024-11-27 08:04:25.366569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:31.501 [2024-11-27 08:04:25.366580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:31.501 [2024-11-27 08:04:25.366591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:31.501 [2024-11-27 08:04:25.366662] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:31.501 [2024-11-27 08:04:25.366675] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:21:31.501 [2024-11-27 08:04:25.366690] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:21:31.501 [2024-11-27 08:04:25.366775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:31.501 [2024-11-27 08:04:25.366789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:31.501 [2024-11-27 08:04:25.366799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:31.501 [2024-11-27 08:04:25.366943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.501 [2024-11-27 08:04:25.366969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ca1c0 with addr=10.0.0.2, port=4420 00:21:31.501 [2024-11-27 08:04:25.366983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ca1c0 is same with the state(6) to be set 00:21:31.501 [2024-11-27 08:04:25.367077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.501 [2024-11-27 08:04:25.367088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9be200 with addr=10.0.0.2, port=4420 00:21:31.501 [2024-11-27 08:04:25.367096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9be200 is same with the state(6) to be set 00:21:31.501 [2024-11-27 08:04:25.367250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.501 [2024-11-27 08:04:25.367261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c9d30 with addr=10.0.0.2, port=4420 00:21:31.501 [2024-11-27 08:04:25.367269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9d30 is same with the state(6) to be set 00:21:31.501 [2024-11-27 08:04:25.367359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.501 [2024-11-27 08:04:25.367370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c9300 with addr=10.0.0.2, port=4420 00:21:31.501 [2024-11-27 08:04:25.367378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9300 is same with the state(6) to be set 00:21:31.501 [2024-11-27 08:04:25.368974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:31.501 [2024-11-27 08:04:25.368998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:31.501 [2024-11-27 08:04:25.369121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.501 [2024-11-27 08:04:25.369136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde85b0 with addr=10.0.0.2, port=4420 00:21:31.501 [2024-11-27 08:04:25.369144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde85b0 is same with the state(6) to be set 00:21:31.501 [2024-11-27 08:04:25.369320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.501 [2024-11-27 08:04:25.369332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de610 with addr=10.0.0.2, port=4420 00:21:31.501 [2024-11-27 08:04:25.369340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8de610 is same with the state(6) to be set 00:21:31.501 [2024-11-27 08:04:25.369525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.501 [2024-11-27 08:04:25.369540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe25d80 with addr=10.0.0.2, port=4420 00:21:31.501 [2024-11-27 08:04:25.369548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe25d80 is same with the state(6) to be set 00:21:31.501 [2024-11-27 08:04:25.369560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ca1c0 (9): Bad file descriptor 00:21:31.501 [2024-11-27 08:04:25.369571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9be200 (9): Bad file descriptor 00:21:31.501 [2024-11-27 08:04:25.369581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9d30 (9): Bad file descriptor 00:21:31.501 [2024-11-27 08:04:25.369590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9300 (9): Bad file descriptor 00:21:31.501 [2024-11-27 08:04:25.369675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.369689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.369702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.369709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.369720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.369728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.369737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.369745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.369754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.369762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-11-27 08:04:25.369770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-11-27 08:04:25.369778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.369983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.369991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 
[2024-11-27 08:04:25.370154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 
08:04:25.370325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-11-27 08:04:25.370449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-11-27 08:04:25.370458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370492] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-11-27 08:04:25.370655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-11-27 08:04:25.370665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.503 [2024-11-27 08:04:25.370673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.503 [2024-11-27 08:04:25.370681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.503 [2024-11-27 08:04:25.370689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.503 [2024-11-27 08:04:25.370698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.503 [2024-11-27 08:04:25.370705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.503 [2024-11-27 08:04:25.370714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.503 [2024-11-27 08:04:25.370721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.503 [2024-11-27 08:04:25.370730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.503 [2024-11-27 08:04:25.370738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.503 [2024-11-27 08:04:25.370747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.503 [2024-11-27 08:04:25.370755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.503 [2024-11-27 08:04:25.370764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.503 [2024-11-27 08:04:25.370771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.503 [2024-11-27 08:04:25.370779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee75a0 is same with the state(6) to be set
00:21:31.503 task offset: 16384 on job bdev=Nvme5n1 fails
00:21:31.503
00:21:31.503                                                                                                   Latency(us)
00:21:31.503 [2024-11-27T07:04:25.612Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s     Average        min         max
00:21:31.503 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.503 Job: Nvme1n1 ended in about 0.64 seconds with error
00:21:31.503 Verification LBA range: start 0x0 length 0x400
00:21:31.503    Nvme1n1                  :       0.64     198.95      12.43      99.47       0.00   211424.54   17438.27   203332.56
00:21:31.503 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.503 Job: Nvme2n1 ended in about 0.65 seconds with error
00:21:31.503 Verification LBA range: start 0x0 length 0x400
00:21:31.503    Nvme2n1                  :       0.65      99.09       6.19      99.09       0.00   310595.67   19831.76   246187.41
00:21:31.503 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.503 Job: Nvme3n1 ended in about 0.65 seconds with error
00:21:31.503 Verification LBA range: start 0x0 length 0x400
00:21:31.503    Nvme3n1                  :       0.65     203.57      12.72      98.70       0.00   198346.42   15272.74   217921.45
00:21:31.503 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.503 Job: Nvme4n1 ended in about 0.65 seconds with error
00:21:31.503 Verification LBA range: start 0x0 length 0x400
00:21:31.503    Nvme4n1                  :       0.65     205.91      12.87      98.35       0.00   191922.05   15044.79   218833.25
00:21:31.503 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.503 Job: Nvme5n1 ended in about 0.62 seconds with error
00:21:31.503 Verification LBA range: start 0x0 length 0x400
00:21:31.503    Nvme5n1                  :       0.62     207.71      12.98     103.86       0.00   181122.23    3704.21   204244.37
00:21:31.503 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.503 Job: Nvme6n1 ended in about 0.65 seconds with error
00:21:31.503 Verification LBA range: start 0x0 length 0x400
00:21:31.503    Nvme6n1                  :       0.65     196.07      12.25      98.03       0.00   188049.14   21085.50   210627.01
00:21:31.503 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.503 Job: Nvme7n1 ended in about 0.65 seconds with error
00:21:31.503 Verification LBA range: start 0x0 length 0x400
00:21:31.503    Nvme7n1                  :       0.65     195.44      12.22      97.72       0.00   183454.50   21427.42   211538.81
00:21:31.503 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.503 Job: Nvme8n1 ended in about 0.63 seconds with error
00:21:31.503 Verification LBA range: start 0x0 length 0x400
00:21:31.503    Nvme8n1                  :       0.63     202.30      12.64     101.15       0.00   170777.08   16070.57   221568.67
00:21:31.503 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.503 Job: Nvme9n1 ended in about 0.66 seconds with error
00:21:31.503 Verification LBA range: start 0x0 length 0x400
00:21:31.503    Nvme9n1                  :       0.66      96.66       6.04      96.66       0.00   263139.51   42170.99   231598.53
00:21:31.503 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.503 Job: Nvme10n1 ended in about 0.66 seconds with error
00:21:31.503 Verification LBA range: start 0x0 length 0x400
00:21:31.503    Nvme10n1                 :       0.66     105.04       6.56      89.81       0.00   251698.31   18578.03   246187.41
00:21:31.503 [2024-11-27T07:04:25.612Z] ===================================================================================================================
00:21:31.503 [2024-11-27T07:04:25.612Z] Total                        :               1710.73     106.92     982.84       0.00   208296.22    3704.21   246187.41
00:21:31.503 [2024-11-27 08:04:25.402376] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:31.503 [2024-11-27 08:04:25.402430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:31.503 [2024-11-27 08:04:25.402666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:31.503 [2024-11-27 08:04:25.402685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2770 with addr=10.0.0.2, port=4420
00:21:31.503 [2024-11-27 08:04:25.402697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2770 is same with the state(6) to be set
00:21:31.503 [2024-11-27 08:04:25.402804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:31.503 [2024-11-27 08:04:25.402817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe37780 with addr=10.0.0.2, port=4420
00:21:31.503 [2024-11-27 08:04:25.402824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37780 is same with
the state(6) to be set 00:21:31.503 [2024-11-27 08:04:25.402838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde85b0 (9): Bad file descriptor 00:21:31.503 [2024-11-27 08:04:25.402850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8de610 (9): Bad file descriptor 00:21:31.503 [2024-11-27 08:04:25.402860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe25d80 (9): Bad file descriptor 00:21:31.503 [2024-11-27 08:04:25.402869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:31.503 [2024-11-27 08:04:25.402877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:31.503 [2024-11-27 08:04:25.402886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:31.503 [2024-11-27 08:04:25.402896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:31.503 [2024-11-27 08:04:25.402906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:31.503 [2024-11-27 08:04:25.402913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:31.503 [2024-11-27 08:04:25.402926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.402934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:31.504 [2024-11-27 08:04:25.402941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.402975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.402983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.402990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:31.504 [2024-11-27 08:04:25.402998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.403005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.403012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.403019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
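
The per-bdev summary above is the tail of the bdevperf run that shutdown_tc3 deliberately interrupts: once the submission queues are deleted, every queued READ completes as ABORTED - SQ DELETION and each job ends "with error" after roughly 0.6 s. One quick sanity check on a table like this is that the Total IOPS row should equal the sum of the per-bdev IOPS rows up to rounding (here the per-row sum comes to 1710.74 against the reported 1710.73). A minimal sketch, assuming the console output has been saved to a hypothetical file named bdevperf_console.log and the rows keep the layout shown above (device name, ':', runtime, then IOPS):

  # Sum the per-bdev IOPS column and print it next to the reported Total.
  # Splitting on ':' keeps the sketch independent of any timestamp prefix on the line.
  awk -F':' '
    /Nvme[0-9]+n1[[:space:]]+:/ { split($NF, f, " "); sum += f[2] }   # f[1]=runtime(s), f[2]=IOPS
    /Total[[:space:]]+:/        { split($NF, f, " "); total = f[1] }  # Total row has no runtime column
    END { printf "per-bdev IOPS sum: %.2f, reported Total: %s\n", sum, total }
  ' bdevperf_console.log
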
00:21:31.504 [2024-11-27 08:04:25.403306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.504 [2024-11-27 08:04:25.403321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1deb0 with addr=10.0.0.2, port=4420 00:21:31.504 [2024-11-27 08:04:25.403329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1deb0 is same with the state(6) to be set 00:21:31.504 [2024-11-27 08:04:25.403339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2770 (9): Bad file descriptor 00:21:31.504 [2024-11-27 08:04:25.403350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37780 (9): Bad file descriptor 00:21:31.504 [2024-11-27 08:04:25.403358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.403365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.403373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.403380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:31.504 [2024-11-27 08:04:25.403389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.403396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.403402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.403409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:31.504 [2024-11-27 08:04:25.403417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.403423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.403431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.403438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:31.504 [2024-11-27 08:04:25.403500] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:21:31.504 [2024-11-27 08:04:25.403516] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:21:31.504 [2024-11-27 08:04:25.403819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1deb0 (9): Bad file descriptor 00:21:31.504 [2024-11-27 08:04:25.403831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.403837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.403845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.403851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:31.504 [2024-11-27 08:04:25.403859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.403865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.403873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.403878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:31.504 [2024-11-27 08:04:25.403914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:31.504 [2024-11-27 08:04:25.403926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:31.504 [2024-11-27 08:04:25.403934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:31.504 [2024-11-27 08:04:25.403944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:31.504 [2024-11-27 08:04:25.403959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:31.504 [2024-11-27 08:04:25.403968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:31.504 [2024-11-27 08:04:25.403977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:31.504 [2024-11-27 08:04:25.404022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.404030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.404038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.404045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:31.504 [2024-11-27 08:04:25.404242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.504 [2024-11-27 08:04:25.404255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c9300 with addr=10.0.0.2, port=4420 00:21:31.504 [2024-11-27 08:04:25.404264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9300 is same with the state(6) to be set 00:21:31.504 [2024-11-27 08:04:25.404364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.504 [2024-11-27 08:04:25.404376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c9d30 with addr=10.0.0.2, port=4420 00:21:31.504 [2024-11-27 08:04:25.404383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9d30 is same with the state(6) to be set 00:21:31.504 [2024-11-27 08:04:25.404584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.504 [2024-11-27 08:04:25.404597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9be200 with addr=10.0.0.2, port=4420 00:21:31.504 [2024-11-27 08:04:25.404605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9be200 is same with the state(6) to be set 00:21:31.504 [2024-11-27 08:04:25.404699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.504 [2024-11-27 08:04:25.404711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ca1c0 with addr=10.0.0.2, port=4420 00:21:31.504 [2024-11-27 08:04:25.404719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ca1c0 is same with the state(6) to be set 00:21:31.504 [2024-11-27 08:04:25.404921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.504 [2024-11-27 08:04:25.404933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe25d80 with addr=10.0.0.2, port=4420 00:21:31.504 [2024-11-27 08:04:25.404941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe25d80 is same with the state(6) to be set 00:21:31.504 [2024-11-27 08:04:25.405079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.504 [2024-11-27 08:04:25.405091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de610 with addr=10.0.0.2, port=4420 00:21:31.504 [2024-11-27 08:04:25.405100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de610 is same with the state(6) to be set 00:21:31.504 [2024-11-27 08:04:25.405259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.504 [2024-11-27 08:04:25.405270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde85b0 with addr=10.0.0.2, port=4420 00:21:31.504 [2024-11-27 08:04:25.405278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde85b0 is same with the state(6) to be set 00:21:31.504 [2024-11-27 08:04:25.405308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9300 (9): Bad file descriptor 00:21:31.504 [2024-11-27 08:04:25.405319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9d30 (9): Bad file descriptor 00:21:31.504 [2024-11-27 08:04:25.405329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x9be200 (9): Bad file descriptor 00:21:31.504 [2024-11-27 08:04:25.405338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ca1c0 (9): Bad file descriptor 00:21:31.504 [2024-11-27 08:04:25.405347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe25d80 (9): Bad file descriptor 00:21:31.504 [2024-11-27 08:04:25.405356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8de610 (9): Bad file descriptor 00:21:31.504 [2024-11-27 08:04:25.405365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde85b0 (9): Bad file descriptor 00:21:31.504 [2024-11-27 08:04:25.405391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.405399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.405407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.405414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:31.504 [2024-11-27 08:04:25.405421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.405428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.405435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.405441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:31.504 [2024-11-27 08:04:25.405448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:31.504 [2024-11-27 08:04:25.405455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:31.504 [2024-11-27 08:04:25.405465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:31.504 [2024-11-27 08:04:25.405472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:31.504 [2024-11-27 08:04:25.405479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:31.505 [2024-11-27 08:04:25.405485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:31.505 [2024-11-27 08:04:25.405491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:31.505 [2024-11-27 08:04:25.405498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:31.505 [2024-11-27 08:04:25.405505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:31.505 [2024-11-27 08:04:25.405510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:31.505 [2024-11-27 08:04:25.405517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:31.505 [2024-11-27 08:04:25.405524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:31.505 [2024-11-27 08:04:25.405531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:31.505 [2024-11-27 08:04:25.405537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:31.505 [2024-11-27 08:04:25.405545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:31.505 [2024-11-27 08:04:25.405551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:31.505 [2024-11-27 08:04:25.405558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:31.505 [2024-11-27 08:04:25.405564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:31.505 [2024-11-27 08:04:25.405570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:31.505 [2024-11-27 08:04:25.405577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
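
Everything from the spdk_app_stop warning down to this point is the expected fallout of stopping the target while bdevperf still holds ten attached controllers: each reconnect attempt fails with connect() errno 111 (ECONNREFUSED), the controller is marked as failed, and bdev_nvme reports "Resetting controller failed." for each nqn.2016-06.io.spdk:cnodeN in turn. When triaging a log like this it can help to see how many reset failures each subsystem accumulated; a small sketch, again run against a hypothetical saved copy of the console output:

  # Tally "Resetting controller failed." messages per subsystem NQN.
  grep 'Resetting controller failed' bdevperf_console.log \
    | grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*' \
    | sort | uniq -c | sort -rn
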
00:21:31.830 08:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2508237 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2508237 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2508237 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.802 rmmod nvme_tcp 00:21:32.802 
rmmod nvme_fabrics 00:21:32.802 rmmod nvme_keyring 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2507969 ']' 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2507969 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2507969 ']' 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2507969 00:21:32.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2507969) - No such process 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2507969 is not found' 00:21:32.802 Process with pid 2507969 is not found 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.802 08:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:35.337 00:21:35.337 real 0m7.079s 00:21:35.337 user 0m16.278s 00:21:35.337 sys 0m1.180s 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.337 ************************************ 00:21:35.337 END TEST nvmf_shutdown_tc3 00:21:35.337 ************************************ 00:21:35.337 08:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:35.337 ************************************ 00:21:35.337 START TEST nvmf_shutdown_tc4 00:21:35.337 ************************************ 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:35.337 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:35.337 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.337 08:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:35.337 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:35.337 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:35.337 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.337 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.338 08:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:35.338 Found net devices under 0000:86:00.0: cvl_0_0 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:35.338 Found net devices under 0000:86:00.1: cvl_0_1 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.338 08:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:21:35.338 00:21:35.338 --- 10.0.0.2 ping statistics --- 00:21:35.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.338 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:21:35.338 00:21:35.338 --- 10.0.0.1 ping statistics --- 00:21:35.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.338 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2509293 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2509293 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2509293 ']' 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
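
Before shutdown_tc4 starts its workload, nvmftestinit rebuilds the test network that the tc3 teardown removed: one of the two e810 ports (cvl_0_0) is moved into a private network namespace, both ends are addressed on 10.0.0.0/24, the NVMe/TCP listen port is opened in the firewall, and reachability is checked with a ping in each direction before nvmf_tgt is started inside the namespace with -m 0x1E (core mask 0b11110, i.e. cores 1-4, matching the four reactor threads reported below). Condensed from the trace above, the setup amounts roughly to the following sketch; the real work is done by the helpers in nvmf/common.sh, not by hand:

  # Move the target-side port into its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port; the SPDK_NVMF comment is what the cleanup step greps for.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Verify reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The comment tag on the iptables rule is what the tc3 teardown above relied on when it ran iptables-save | grep -v SPDK_NVMF | iptables-restore, so test rules can be stripped without disturbing the rest of the ruleset.
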
00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.338 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:35.338 [2024-11-27 08:04:29.338998] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:21:35.338 [2024-11-27 08:04:29.339044] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.338 [2024-11-27 08:04:29.406148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.597 [2024-11-27 08:04:29.449700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.597 [2024-11-27 08:04:29.449736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.597 [2024-11-27 08:04:29.449746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.597 [2024-11-27 08:04:29.449752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.597 [2024-11-27 08:04:29.449758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.597 [2024-11-27 08:04:29.451351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.597 [2024-11-27 08:04:29.451435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.597 [2024-11-27 08:04:29.451546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.597 [2024-11-27 08:04:29.451547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:35.597 [2024-11-27 08:04:29.602144] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:35.597 08:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.597 08:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:35.597 Malloc1 
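The shutdown.sh steps traced above (lines 21 through 36) create the TCP transport and then regenerate rpcs.txt with one block per subsystem before replaying it through rpc_cmd; the file's contents are not echoed in this log, so the block below is only a plausible reconstruction of one of the ten entries using standard SPDK RPCs (malloc size, block size and serial number are assumptions; the listener address matches the one this run listens on):

    rpc="sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    # Transport creation exactly as traced (-t tcp -o, 8192-byte I/O units)
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # One of the ten per-subsystem blocks the loop plausibly writes into rpcs.txt
    i=1
    $rpc bdev_malloc_create -b Malloc$i 128 512                          # 128 MiB, 512 B blocks (assumed)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i  # serial number assumed
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420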
00:21:35.857 [2024-11-27 08:04:29.710995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.857 Malloc2 00:21:35.857 Malloc3 00:21:35.857 Malloc4 00:21:35.857 Malloc5 00:21:35.857 Malloc6 00:21:35.857 Malloc7 00:21:36.117 Malloc8 00:21:36.117 Malloc9 00:21:36.117 Malloc10 00:21:36.117 08:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.117 08:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:36.117 08:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.117 08:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:36.117 08:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2509561 00:21:36.117 08:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:36.117 08:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:36.117 [2024-11-27 08:04:30.202534] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2509293 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2509293 ']' 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2509293 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2509293 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2509293' 00:21:41.403 killing process with pid 2509293 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2509293 00:21:41.403 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2509293 00:21:41.403 Write completed with error (sct=0, 
sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 [2024-11-27 08:04:35.218680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02810 is same with Write completed with error (sct=0, sc=8) 00:21:41.403 the state(6) to be set 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 [2024-11-27 08:04:35.218734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02810 is same with starting I/O failed: -6 00:21:41.403 the state(6) to be set 00:21:41.403 [2024-11-27 08:04:35.218744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02810 is same with the state(6) to be set 00:21:41.403 [2024-11-27 08:04:35.218750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02810 is same with the state(6) to be set 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 [2024-11-27 08:04:35.218757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02810 is same with the state(6) to be set 00:21:41.403 [2024-11-27 08:04:35.218763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02810 is same with the state(6) to be set 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 [2024-11-27 08:04:35.218901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair 
id 2 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 starting I/O failed: -6 00:21:41.403 [2024-11-27 08:04:35.219214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02b90 is same with the state(6) to be set 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 [2024-11-27 08:04:35.219241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02b90 is same with the state(6) to be set 00:21:41.403 [2024-11-27 08:04:35.219250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02b90 is same with the state(6) to be set 00:21:41.403 Write completed with error (sct=0, sc=8) 00:21:41.403 [2024-11-27 08:04:35.219256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02b90 is same with the state(6) to be set 00:21:41.403 starting I/O failed: -6 00:21:41.403 [2024-11-27 08:04:35.219263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02b90 is same with the state(6) to be set 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 [2024-11-27 08:04:35.219275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02b90 is same with the state(6) to be set 00:21:41.404 [2024-11-27 08:04:35.219282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02b90 is same with the state(6) to be set 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 [2024-11-27 08:04:35.219289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02b90 is same with the state(6) to be set 00:21:41.404 starting I/O failed: -6 00:21:41.404 [2024-11-27 08:04:35.219295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02b90 is same with the state(6) to be set 00:21:41.404 [2024-11-27 08:04:35.219302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02b90 is same with the state(6) to be set 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write 
completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 [2024-11-27 08:04:35.219786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03060 is same with Write completed with error (sct=0, sc=8) 00:21:41.404 the state(6) to be set 00:21:41.404 starting I/O failed: -6 00:21:41.404 [2024-11-27 08:04:35.219813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03060 is same with the state(6) to be set 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 [2024-11-27 08:04:35.219821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03060 is same with the state(6) to be set 00:21:41.404 [2024-11-27 08:04:35.219829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03060 is same with the state(6) to be set 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 [2024-11-27 08:04:35.219836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03060 is same with the state(6) to be set 00:21:41.404 starting I/O failed: -6 00:21:41.404 [2024-11-27 08:04:35.219843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03060 is same with the state(6) to be set 00:21:41.404 [2024-11-27 08:04:35.219868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.404 starting I/O failed: -6 00:21:41.404 starting I/O failed: -6 00:21:41.404 [2024-11-27 08:04:35.220160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196c000 is same with the state(6) to be set 00:21:41.404 starting I/O failed: -6 00:21:41.404 [2024-11-27 08:04:35.220184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196c000 is same with the state(6) to be set 00:21:41.404 [2024-11-27 08:04:35.220193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196c000 is same with the state(6) to be set 00:21:41.404 [2024-11-27 08:04:35.220204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196c000 is same with the state(6) to be set 00:21:41.404 [2024-11-27 08:04:35.220211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196c000 is same with the state(6) to be set 00:21:41.404 [2024-11-27 08:04:35.220217] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196c000 is same with the state(6) to be set 00:21:41.404 starting I/O failed: -6 00:21:41.404 [2024-11-27 08:04:35.220223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196c000 is same with the state(6) to be set 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 [2024-11-27 08:04:35.220664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197fc70 is same with starting I/O failed: -6 00:21:41.404 the state(6) to be set 00:21:41.404 [2024-11-27 08:04:35.220684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197fc70 is same with the state(6) to be set 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 [2024-11-27 08:04:35.220690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197fc70 is same with the state(6) to be set 00:21:41.404 starting I/O failed: -6 00:21:41.404 [2024-11-27 08:04:35.220698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197fc70 is same with the state(6) to be set 00:21:41.404 [2024-11-27 08:04:35.220705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197fc70 is same with the state(6) to be set 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 [2024-11-27 08:04:35.220712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197fc70 is same with the state(6) to be set 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 
Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.404 Write completed with error (sct=0, sc=8) 00:21:41.404 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 [2024-11-27 08:04:35.221080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:41.405 [2024-11-27 08:04:35.221121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980140 is same with the state(6) to be set 00:21:41.405 [2024-11-27 08:04:35.221139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980140 is same with the state(6) to be set 00:21:41.405 [2024-11-27 08:04:35.221146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980140 is same with the state(6) to be set 00:21:41.405 [2024-11-27 08:04:35.221152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980140 is same with the state(6) to be set 00:21:41.405 [2024-11-27 08:04:35.221158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980140 is same with the state(6) to be set 00:21:41.405 [2024-11-27 08:04:35.221165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980140 is same with the state(6) to be set 00:21:41.405 [2024-11-27 08:04:35.221172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980140 is same with the state(6) to be set 00:21:41.405 [2024-11-27 08:04:35.221179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980140 is same with Write completed with error (sct=0, sc=8) 00:21:41.405 the state(6) to be set 00:21:41.405 starting I/O failed: -6 00:21:41.405 [2024-11-27 08:04:35.221187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980140 is same with the state(6) to be set 00:21:41.405 [2024-11-27 08:04:35.221194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980140 is same with the state(6) to be set 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 
Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 [2024-11-27 08:04:35.221462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980610 is same with the state(6) to be set 00:21:41.405 [2024-11-27 08:04:35.221475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980610 is same with the state(6) to be set 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 [2024-11-27 08:04:35.221483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980610 is same with the state(6) to be set 00:21:41.405 starting I/O failed: -6 00:21:41.405 [2024-11-27 08:04:35.221489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980610 is same with the state(6) to be set 00:21:41.405 [2024-11-27 08:04:35.221496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980610 is same with the state(6) to be set 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 [2024-11-27 08:04:35.221502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980610 is same with the state(6) to be set 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with 
error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.405 starting I/O failed: -6 00:21:41.405 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 [2024-11-27 08:04:35.222646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b6b0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.222662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b6b0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.222668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b6b0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.222676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b6b0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.222683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b6b0 is same with the state(6) to be set 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 [2024-11-27 08:04:35.222690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b6b0 is same with the state(6) to be set 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 [2024-11-27 08:04:35.222766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:41.406 NVMe io qpair process completion error 00:21:41.406 [2024-11-27 08:04:35.223016] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178bba0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178bba0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178bba0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178bba0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178bba0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178bba0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178bba0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178bba0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178bba0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178bba0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178c090 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178c090 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178c090 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178c090 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178c090 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.223903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178c090 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.224326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b1e0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.224350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b1e0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.224359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b1e0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.224367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b1e0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.224373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b1e0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.224381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b1e0 is same with the 
state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.224387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b1e0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.224393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b1e0 is same with the state(6) to be set 00:21:41.406 [2024-11-27 08:04:35.224400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b1e0 is same with the state(6) to be set 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 [2024-11-27 08:04:35.226115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 
starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.406 starting I/O failed: -6 00:21:41.406 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 [2024-11-27 08:04:35.227010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 
00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 [2024-11-27 08:04:35.228031] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.407 starting I/O failed: -6 00:21:41.407 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write 
completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 [2024-11-27 08:04:35.229167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178db50 is same with the state(6) to be set 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 [2024-11-27 08:04:35.229190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178db50 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178db50 is same with the state(6) to be set 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 [2024-11-27 08:04:35.229205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178db50 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178db50 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178db50 is same with Write completed with error (sct=0, sc=8) 00:21:41.408 the state(6) to be set 00:21:41.408 starting I/O failed: -6 00:21:41.408 [2024-11-27 08:04:35.229227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178db50 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178db50 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178db50 is same with the state(6) to be set 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 [2024-11-27 08:04:35.229246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178db50 is same with starting I/O failed: -6 00:21:41.408 the state(6) to be set 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write 
completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 [2024-11-27 08:04:35.229461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:41.408 NVMe io qpair process completion error 00:21:41.408 [2024-11-27 08:04:35.229548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e020 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e020 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e020 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e020 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e020 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e020 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e020 is same with the state(6) to be set 00:21:41.408 [2024-11-27 08:04:35.229610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e020 is same with the state(6) to be set 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 Write completed with error (sct=0, sc=8) 00:21:41.408 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 [2024-11-27 08:04:35.229999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e510 is same with the state(6) to be set 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 [2024-11-27 08:04:35.230018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e510 is same with the state(6) to be set 00:21:41.409 [2024-11-27 08:04:35.230025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e510 is same with the state(6) to be set 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 [2024-11-27 08:04:35.230032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e510 is same with the state(6) to be set 00:21:41.409 starting I/O failed: -6 00:21:41.409 [2024-11-27 08:04:35.230039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x178e510 is same with the state(6) to be set 00:21:41.409 [2024-11-27 08:04:35.230046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e510 is same with the state(6) to be set 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 [2024-11-27 08:04:35.230324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178d680 is same with the state(6) to be set 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 [2024-11-27 08:04:35.230344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178d680 is same with the state(6) to be set 00:21:41.409 [2024-11-27 08:04:35.230356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178d680 is same with Write completed with error (sct=0, sc=8) 00:21:41.409 the state(6) to be set 00:21:41.409 [2024-11-27 08:04:35.230364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178d680 is same with the state(6) to be set 00:21:41.409 [2024-11-27 08:04:35.230370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178d680 is same with the state(6) to be set 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 [2024-11-27 08:04:35.230376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178d680 is same with the state(6) to be set 00:21:41.409 [2024-11-27 08:04:35.230383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178d680 is same with the state(6) to be set 00:21:41.409 [2024-11-27 08:04:35.230389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178d680 is same with the state(6) to be set 00:21:41.409 [2024-11-27 08:04:35.230397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 
00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.409 starting I/O failed: -6 00:21:41.409 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 [2024-11-27 08:04:35.231275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 
starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 [2024-11-27 
08:04:35.232318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.410 starting I/O failed: -6 00:21:41.410 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 
00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 [2024-11-27 08:04:35.234306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.411 NVMe io qpair process completion error 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with 
error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 [2024-11-27 08:04:35.235281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:41.411 starting I/O failed: -6 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.411 Write completed with error (sct=0, sc=8) 00:21:41.411 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with 
error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 [2024-11-27 08:04:35.236200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 
00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 [2024-11-27 08:04:35.237282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error (sct=0, sc=8) 00:21:41.412 starting I/O failed: -6 00:21:41.412 Write completed with error 
(sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 [2024-11-27 08:04:35.239361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:41.413 NVMe io 
qpair process completion error 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 [2024-11-27 08:04:35.240392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:41.413 starting I/O failed: -6 00:21:41.413 starting I/O failed: -6 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 
00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.413 starting I/O failed: -6 00:21:41.413 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 [2024-11-27 08:04:35.241304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed 
with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 [2024-11-27 08:04:35.242398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with 
error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.414 Write completed with error (sct=0, sc=8) 00:21:41.414 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error 
(sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 [2024-11-27 08:04:35.250920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.415 NVMe io qpair process completion error 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 
00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 [2024-11-27 08:04:35.251890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:41.415 starting I/O failed: -6 00:21:41.415 starting I/O failed: -6 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 starting I/O failed: -6 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.415 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 [2024-11-27 08:04:35.252734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 
00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 
00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 [2024-11-27 08:04:35.253865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.416 Write completed with error (sct=0, sc=8) 00:21:41.416 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting 
I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 [2024-11-27 08:04:35.255828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.417 NVMe io qpair process completion error 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 starting I/O failed: -6 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, sc=8) 00:21:41.417 Write completed with error (sct=0, 
00:21:41.417 Write completed with error (sct=0, sc=8)
00:21:41.417 starting I/O failed: -6
00:21:41.417 [2024-11-27 08:04:35.256798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:41.418 Write completed with error (sct=0, sc=8)
00:21:41.418 starting I/O failed: -6
00:21:41.418 [2024-11-27 08:04:35.257685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:41.418 Write completed with error (sct=0, sc=8)
00:21:41.418 starting I/O failed: -6
00:21:41.418 [2024-11-27 08:04:35.258935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:41.419 Write completed with error (sct=0, sc=8)
00:21:41.419 starting I/O failed: -6
00:21:41.419 [2024-11-27 08:04:35.260739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.419 NVMe io qpair process completion error
00:21:41.419 Write completed with error (sct=0, sc=8)
00:21:41.419 starting I/O failed: -6
00:21:41.419 [2024-11-27 08:04:35.261773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:41.420 Write completed with error (sct=0, sc=8)
00:21:41.420 starting I/O failed: -6
00:21:41.420 [2024-11-27 08:04:35.262684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.420 Write completed with error (sct=0, sc=8)
00:21:41.420 starting I/O failed: -6
00:21:41.421 [2024-11-27 08:04:35.263684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:41.421 Write completed with error (sct=0, sc=8)
00:21:41.421 starting I/O failed: -6
00:21:41.421 [2024-11-27 08:04:35.268502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:41.421 NVMe io qpair process completion error
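The "Write completed with error (sct=0, sc=8)" entries are the per-I/O completion callbacks reporting the NVMe status fields: sct is the status code type (0 = generic command status) and sc the status code, where 0x08 in the generic set is defined by the NVMe base specification as "Command Aborted due to SQ Deletion", which is what outstanding writes report once their queue pair is torn down. Below is a hedged sketch of such a callback; the function name write_done() and the cb_arg handling are illustrative, not the test's code.

```c
#include <stdio.h>

#include "spdk/nvme.h"

/* Illustrative I/O completion callback (name and context handling are
 * assumptions). The status fields in struct spdk_nvme_cpl are what
 * produce lines like "Write completed with error (sct=0, sc=8)". */
static void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cb_arg; /* per-I/O context would normally be recycled here */

    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
                (int)cpl->status.sct, (int)cpl->status.sc);
        return;
    }

    /* Success path: account for the completed write. */
}
```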
00:21:41.422 Write completed with error (sct=0, sc=8)
00:21:41.422 starting I/O failed: -6
00:21:41.422 [2024-11-27 08:04:35.269496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:41.422 Write completed with error (sct=0, sc=8)
00:21:41.422 starting I/O failed: -6
00:21:41.422 [2024-11-27 08:04:35.270410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.423 Write completed with error (sct=0, sc=8)
00:21:41.423 starting I/O failed: -6
00:21:41.423 [2024-11-27 08:04:35.271418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:41.423 Write completed with error (sct=0, sc=8)
00:21:41.423 starting I/O failed: -6
00:21:41.423 [2024-11-27 08:04:35.275824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:41.423 NVMe io qpair process completion error
00:21:41.424 Write completed with error (sct=0, sc=8)
00:21:41.424 starting I/O failed: -6
00:21:41.424 [2024-11-27 08:04:35.276855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:41.424 Write completed with error (sct=0, sc=8)
00:21:41.424 starting I/O failed: -6
00:21:41.424 [2024-11-27 08:04:35.277767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.425 Write completed with error (sct=0, sc=8)
00:21:41.425 starting I/O failed: -6
00:21:41.425 [2024-11-27 08:04:35.278771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
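After a queue pair reports a transport error like the ones above, an application can either give up on the controller or try to re-establish the queue. Whether this test harness reconnects or simply records the failures is not visible in this log; the following is only a sketch of the retry option using SPDK's spdk_nvme_ctrlr_reconnect_io_qpair(), with an illustrative try_reconnect() wrapper.

```c
#include <stdio.h>
#include <stdbool.h>

#include "spdk/nvme.h"

/* Illustrative recovery attempt (wrapper name is an assumption): once
 * spdk_nvme_qpair_process_completions() has returned a negative value,
 * try to reconnect the qpair instead of abandoning the controller. */
static bool
try_reconnect(struct spdk_nvme_qpair *qpair)
{
    int rc = spdk_nvme_ctrlr_reconnect_io_qpair(qpair);

    if (rc != 0) {
        fprintf(stderr, "qpair reconnect failed: %d\n", rc);
        return false;
    }

    return true;
}
```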
(sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 Write completed with error (sct=0, sc=8) 00:21:41.425 starting I/O failed: -6 00:21:41.425 [2024-11-27 08:04:35.281164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:41.425 NVMe io qpair process completion error 00:21:41.425 Initializing NVMe Controllers 00:21:41.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:41.425 Controller IO queue size 128, less than required. 00:21:41.425 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:41.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:41.425 Controller IO queue size 128, less than required. 00:21:41.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:41.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.426 Controller IO queue size 128, less than required. 00:21:41.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:41.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:41.426 Controller IO queue size 128, less than required. 00:21:41.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:41.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:41.426 Controller IO queue size 128, less than required. 00:21:41.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:41.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:41.426 Controller IO queue size 128, less than required. 
00:21:41.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:41.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:41.426 Controller IO queue size 128, less than required.
00:21:41.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:41.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:41.426 Controller IO queue size 128, less than required.
00:21:41.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:41.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:41.426 Controller IO queue size 128, less than required.
00:21:41.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:41.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:41.426 Controller IO queue size 128, less than required.
00:21:41.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:41.426 Initialization complete. Launching workers.
00:21:41.426 ========================================================
00:21:41.426                                                                                                                Latency(us)
00:21:41.426 Device Information                                                       : IOPS      MiB/s    Average        min        max
00:21:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  2130.67     91.55   60080.63     721.81  118026.88
00:21:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  2124.15     91.27   60318.78     743.64  123348.91
00:21:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2116.55     90.95   59824.79     844.48  105328.99
00:21:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  2128.06     91.44   60205.85     940.37  126096.06
00:21:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  2107.43     90.55   60078.17     938.90  100486.54
00:21:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  2159.33     92.78   58641.28     878.95   99017.54
00:21:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  2159.55     92.79   58652.20     666.34   98006.48
00:21:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  2179.96     93.67   58126.64     702.09   98072.95
00:21:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  2207.76     94.86   57487.35     633.25   97012.68
00:21:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  2164.98     93.03   58638.88     787.27  115470.89
00:21:41.426 ========================================================
00:21:41.426 Total                                                                    : 21478.44    922.90   59192.60     633.25  126096.06
00:21:41.426
00:21:41.426 [2024-11-27 08:04:35.284161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113dae0 is same with the state(6) to be set
00:21:41.426 [2024-11-27 08:04:35.284211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113b890 is same with the state(6) to be set
00:21:41.426 [2024-11-27 08:04:35.284243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113d720 is same with the state(6) to be set
00:21:41.426 [2024-11-27 08:04:35.284276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113c740 is same with the state(6) to be set
00:21:41.426 [2024-11-27 08:04:35.284307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113d900 is same with the state(6) to be set
00:21:41.426 [2024-11-27 08:04:35.284336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113bef0 is same with the state(6) to be set
00:21:41.426 [2024-11-27 08:04:35.284364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113ca70 is same with the state(6) to be set
00:21:41.426 [2024-11-27 08:04:35.284393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113c410 is same with the state(6) to be set
00:21:41.426 [2024-11-27 08:04:35.284421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113bbc0 is same with the state(6) to be set
00:21:41.426 [2024-11-27 08:04:35.284450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113b560 is same with the state(6) to be set
00:21:41.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:41.686 08:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:42.624 08:04:36
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2509561 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2509561 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2509561 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.624 rmmod nvme_tcp 00:21:42.624 rmmod nvme_fabrics 00:21:42.624 rmmod nvme_keyring 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2509293 ']' 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2509293 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2509293 ']' 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2509293 00:21:42.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2509293) - No such process 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2509293 is not found' 00:21:42.624 Process with pid 2509293 is not found 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.624 08:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.159 08:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.159 00:21:45.159 real 0m9.783s 00:21:45.159 user 0m25.168s 00:21:45.159 sys 0m4.964s 00:21:45.159 08:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.159 08:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:45.159 ************************************ 00:21:45.159 END TEST nvmf_shutdown_tc4 00:21:45.159 ************************************ 00:21:45.159 08:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:45.159 00:21:45.159 real 0m39.303s 00:21:45.159 user 1m36.325s 00:21:45.159 sys 0m13.211s 00:21:45.159 08:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.159 08:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:21:45.159 ************************************ 00:21:45.159 END TEST nvmf_shutdown 00:21:45.159 ************************************ 00:21:45.160 08:04:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:45.160 08:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:45.160 08:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.160 08:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:45.160 ************************************ 00:21:45.160 START TEST nvmf_nsid 00:21:45.160 ************************************ 00:21:45.160 08:04:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:45.160 * Looking for test storage... 00:21:45.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.160 08:04:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:45.160 08:04:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:45.160 08:04:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:45.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.160 --rc genhtml_branch_coverage=1 00:21:45.160 --rc genhtml_function_coverage=1 00:21:45.160 --rc genhtml_legend=1 00:21:45.160 --rc geninfo_all_blocks=1 00:21:45.160 --rc geninfo_unexecuted_blocks=1 00:21:45.160 00:21:45.160 ' 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:45.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.160 --rc genhtml_branch_coverage=1 00:21:45.160 --rc genhtml_function_coverage=1 00:21:45.160 --rc genhtml_legend=1 00:21:45.160 --rc geninfo_all_blocks=1 00:21:45.160 --rc geninfo_unexecuted_blocks=1 00:21:45.160 00:21:45.160 ' 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:45.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.160 --rc genhtml_branch_coverage=1 00:21:45.160 --rc genhtml_function_coverage=1 00:21:45.160 --rc genhtml_legend=1 00:21:45.160 --rc geninfo_all_blocks=1 00:21:45.160 --rc geninfo_unexecuted_blocks=1 00:21:45.160 00:21:45.160 ' 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:45.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.160 --rc genhtml_branch_coverage=1 00:21:45.160 --rc genhtml_function_coverage=1 00:21:45.160 --rc genhtml_legend=1 00:21:45.160 --rc geninfo_all_blocks=1 00:21:45.160 --rc geninfo_unexecuted_blocks=1 00:21:45.160 00:21:45.160 ' 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.160 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.161 08:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:50.434 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:50.434 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:50.434 Found net devices under 0000:86:00.0: cvl_0_0 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:50.434 Found net devices under 0000:86:00.1: cvl_0_1 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.434 08:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.434 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:50.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:21:50.435 00:21:50.435 --- 10.0.0.2 ping statistics --- 00:21:50.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.435 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:50.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:21:50.435 00:21:50.435 --- 10.0.0.1 ping statistics --- 00:21:50.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.435 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:50.435 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2514014 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2514014 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2514014 ']' 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.694 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:50.694 [2024-11-27 08:04:44.613217] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:21:50.694 [2024-11-27 08:04:44.613263] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.694 [2024-11-27 08:04:44.679266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.694 [2024-11-27 08:04:44.720463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.694 [2024-11-27 08:04:44.720501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.694 [2024-11-27 08:04:44.720508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.694 [2024-11-27 08:04:44.720514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.694 [2024-11-27 08:04:44.720519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.694 [2024-11-27 08:04:44.721074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2514036 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=dc8ae30f-cf1c-4ba6-8bdd-6f4107706bfe 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=5a073f12-ac5a-4b6a-a994-c6e0478dd608 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a2967a41-8ae9-4064-9123-5d1abfcd7bc7 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.953 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:50.953 null0 00:21:50.953 null1 00:21:50.953 [2024-11-27 08:04:44.905539] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:21:50.954 [2024-11-27 08:04:44.905582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514036 ] 00:21:50.954 null2 00:21:50.954 [2024-11-27 08:04:44.910104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.954 [2024-11-27 08:04:44.934304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.954 [2024-11-27 08:04:44.967429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.954 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.954 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2514036 /var/tmp/tgt2.sock 00:21:50.954 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2514036 ']' 00:21:50.954 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:50.954 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.954 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:50.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:50.954 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.954 08:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:50.954 [2024-11-27 08:04:45.015723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.212 08:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.212 08:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:51.212 08:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:51.470 [2024-11-27 08:04:45.547482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.471 [2024-11-27 08:04:45.563593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:51.729 nvme0n1 nvme0n2 00:21:51.729 nvme1n1 00:21:51.729 08:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:51.729 08:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:51.729 08:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:52.665 08:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:53.601 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:53.601 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:53.601 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:53.601 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:53.601 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:53.601 08:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid dc8ae30f-cf1c-4ba6-8bdd-6f4107706bfe 00:21:53.601 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:53.601 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:53.601 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dc8ae30fcf1c4ba68bdd6f4107706bfe 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DC8AE30FCF1C4BA68BDD6F4107706BFE 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ DC8AE30FCF1C4BA68BDD6F4107706BFE == \D\C\8\A\E\3\0\F\C\F\1\C\4\B\A\6\8\B\D\D\6\F\4\1\0\7\7\0\6\B\F\E ]] 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 5a073f12-ac5a-4b6a-a994-c6e0478dd608 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5a073f12ac5a4b6aa994c6e0478dd608 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5A073F12AC5A4B6AA994C6E0478DD608 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 5A073F12AC5A4B6AA994C6E0478DD608 == \5\A\0\7\3\F\1\2\A\C\5\A\4\B\6\A\A\9\9\4\C\6\E\0\4\7\8\D\D\6\0\8 ]] 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:53.860 08:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a2967a41-8ae9-4064-9123-5d1abfcd7bc7 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a2967a418ae9406491235d1abfcd7bc7 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A2967A418AE9406491235D1ABFCD7BC7 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A2967A418AE9406491235D1ABFCD7BC7 == \A\2\9\6\7\A\4\1\8\A\E\9\4\0\6\4\9\1\2\3\5\D\1\A\B\F\C\D\7\B\C\7 ]] 00:21:53.860 08:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2514036 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2514036 ']' 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2514036 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2514036 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2514036' 00:21:54.119 killing process with pid 2514036 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2514036 00:21:54.119 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2514036 00:21:54.378 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:54.378 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.378 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:54.378 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.378 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:54.378 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 
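The three checks above compare the NGUID that nvme id-ns reports for each namespace against the UUID assigned on the target, with the dashes stripped and the hex digits upper-cased. A standalone sketch of that conversion and comparison, reusing the first UUID and device from this run (the tr-based upper-casing stands in for the helper's own conversion, it is not its exact code):

uuid=dc8ae30f-cf1c-4ba6-8bdd-6f4107706bfe              # UUID given to the target namespace
expected=$(echo "$uuid" | tr -d '-' | tr 'a-f' 'A-F')  # strip dashes, upper-case the hex
actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr 'a-f' 'A-F')
[[ "$actual" == "$expected" ]] && echo "nguid matches uuid" || echo "nguid mismatch"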
00:21:54.378 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.378 rmmod nvme_tcp 00:21:54.378 rmmod nvme_fabrics 00:21:54.637 rmmod nvme_keyring 00:21:54.637 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.637 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:54.637 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:54.637 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2514014 ']' 00:21:54.637 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2514014 00:21:54.637 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2514014 ']' 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2514014 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2514014 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2514014' 00:21:54.638 killing process with pid 2514014 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2514014 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2514014 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.638 08:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.176 08:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.176 00:21:57.176 real 0m11.914s 00:21:57.176 user 0m9.524s 00:21:57.176 sys 0m5.116s 00:21:57.176 08:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.176 08:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:57.176 ************************************ 00:21:57.176 END TEST nvmf_nsid 00:21:57.176 ************************************ 00:21:57.176 08:04:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:57.176 00:21:57.176 real 11m46.151s 00:21:57.176 user 25m36.498s 00:21:57.176 sys 3m33.509s 00:21:57.176 08:04:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.176 08:04:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:57.176 ************************************ 00:21:57.176 END TEST nvmf_target_extra 00:21:57.176 ************************************ 00:21:57.176 08:04:50 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:57.176 08:04:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:57.176 08:04:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.176 08:04:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:57.176 ************************************ 00:21:57.176 START TEST nvmf_host 00:21:57.176 ************************************ 00:21:57.176 08:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:57.176 * Looking for test storage... 00:21:57.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:57.176 08:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:57.176 08:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:57.176 08:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:57.176 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:57.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.177 --rc genhtml_branch_coverage=1 00:21:57.177 --rc genhtml_function_coverage=1 00:21:57.177 --rc genhtml_legend=1 00:21:57.177 --rc geninfo_all_blocks=1 00:21:57.177 --rc geninfo_unexecuted_blocks=1 00:21:57.177 00:21:57.177 ' 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:57.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.177 --rc genhtml_branch_coverage=1 00:21:57.177 --rc genhtml_function_coverage=1 00:21:57.177 --rc genhtml_legend=1 00:21:57.177 --rc geninfo_all_blocks=1 00:21:57.177 --rc geninfo_unexecuted_blocks=1 00:21:57.177 00:21:57.177 ' 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:57.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.177 --rc genhtml_branch_coverage=1 00:21:57.177 --rc genhtml_function_coverage=1 00:21:57.177 --rc genhtml_legend=1 00:21:57.177 --rc geninfo_all_blocks=1 00:21:57.177 --rc geninfo_unexecuted_blocks=1 00:21:57.177 00:21:57.177 ' 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:57.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.177 --rc genhtml_branch_coverage=1 00:21:57.177 --rc genhtml_function_coverage=1 00:21:57.177 --rc genhtml_legend=1 00:21:57.177 --rc geninfo_all_blocks=1 00:21:57.177 --rc geninfo_unexecuted_blocks=1 00:21:57.177 00:21:57.177 ' 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
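The lt/cmp_versions trace above decides whether the installed lcov (1.15) is older than 2 by splitting each version string on separators and comparing the fields numerically, left to right. A simplified sketch of that comparison, assuming plain numeric dot-separated fields (the real helper also splits on '-' and ':'):

version_lt() {                            # returns 0 (true) if $1 is older than $2
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                              # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov is older than 2, keep the 1.x coverage options"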
00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.177 ************************************ 00:21:57.177 START TEST nvmf_multicontroller 00:21:57.177 ************************************ 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:57.177 * Looking for test storage... 
00:21:57.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:57.177 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:57.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.437 --rc genhtml_branch_coverage=1 00:21:57.437 --rc genhtml_function_coverage=1 00:21:57.437 --rc genhtml_legend=1 00:21:57.437 --rc geninfo_all_blocks=1 00:21:57.437 --rc geninfo_unexecuted_blocks=1 00:21:57.437 00:21:57.437 ' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:57.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.437 --rc genhtml_branch_coverage=1 00:21:57.437 --rc genhtml_function_coverage=1 00:21:57.437 --rc genhtml_legend=1 00:21:57.437 --rc geninfo_all_blocks=1 00:21:57.437 --rc geninfo_unexecuted_blocks=1 00:21:57.437 00:21:57.437 ' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:57.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.437 --rc genhtml_branch_coverage=1 00:21:57.437 --rc genhtml_function_coverage=1 00:21:57.437 --rc genhtml_legend=1 00:21:57.437 --rc geninfo_all_blocks=1 00:21:57.437 --rc geninfo_unexecuted_blocks=1 00:21:57.437 00:21:57.437 ' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:57.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.437 --rc genhtml_branch_coverage=1 00:21:57.437 --rc genhtml_function_coverage=1 00:21:57.437 --rc genhtml_legend=1 00:21:57.437 --rc geninfo_all_blocks=1 00:21:57.437 --rc geninfo_unexecuted_blocks=1 00:21:57.437 00:21:57.437 ' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:57.437 08:04:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:57.437 08:04:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.437 08:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.709 
08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:02.709 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:02.709 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.709 08:04:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:02.709 Found net devices under 0000:86:00.0: cvl_0_0 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:02.709 Found net devices under 0000:86:00.1: cvl_0_1 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
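The device-detection loop above maps each E810 PCI function (0000:86:00.0 and 0000:86:00.1, device ID 0x159b) to its kernel network interface by globbing the PCI device's net/ directory in sysfs, which yields cvl_0_0 and cvl_0_1. A standalone sketch of that lookup for the first address from this run:

pci=0000:86:00.0                       # first E810 port found in the trace above
for path in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$path" ] || continue         # glob did not match: no netdev bound to this function
    echo "Found net device under $pci: ${path##*/}"
done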
00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.709 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.710 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.710 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.710 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.710 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.710 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.710 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.710 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:22:02.968 00:22:02.968 --- 10.0.0.2 ping statistics --- 00:22:02.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.968 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:02.968 00:22:02.968 --- 10.0.0.1 ping statistics --- 00:22:02.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.968 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.968 08:04:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.968 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:02.968 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.968 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.968 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.968 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2518348 00:22:02.968 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2518348 00:22:02.969 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:02.969 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2518348 ']' 00:22:02.969 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.969 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.969 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.969 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.969 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 [2024-11-27 08:04:57.078442] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
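The nvmf_tcp_init steps above isolate the target-side port (cvl_0_0) in its own network namespace with 10.0.0.2/24, leave the initiator-side port (cvl_0_1) in the root namespace with 10.0.0.1/24, open TCP port 4420, and prove reachability in both directions before the target application is started inside that namespace. Condensed into the underlying commands (root required; interface and namespace names as in this run):

ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The run tags this rule with an SPDK_NVMF comment so that teardown can later filter it
# back out of iptables-save output (the iptr step seen in the nsid cleanup above).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator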
00:22:03.228 [2024-11-27 08:04:57.078487] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.228 [2024-11-27 08:04:57.145184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:03.228 [2024-11-27 08:04:57.188498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.228 [2024-11-27 08:04:57.188535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.228 [2024-11-27 08:04:57.188543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.228 [2024-11-27 08:04:57.188549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.228 [2024-11-27 08:04:57.188554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.228 [2024-11-27 08:04:57.190009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.228 [2024-11-27 08:04:57.190099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.228 [2024-11-27 08:04:57.190102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.228 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.228 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:03.228 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:03.228 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:03.228 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.228 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.228 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.228 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.487 [2024-11-27 08:04:57.339875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.487 Malloc0 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.487 [2024-11-27 08:04:57.398751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.487 [2024-11-27 08:04:57.406660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.487 Malloc1 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.487 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2518375 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2518375 /var/tmp/bdevperf.sock 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2518375 ']' 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.488 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.747 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.747 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:03.747 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:03.747 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.747 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.006 NVMe0n1 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.006 1 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.006 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.006 request: 00:22:04.006 { 00:22:04.006 "name": "NVMe0", 00:22:04.006 "trtype": "tcp", 00:22:04.006 "traddr": "10.0.0.2", 00:22:04.006 "adrfam": "ipv4", 00:22:04.006 "trsvcid": "4420", 00:22:04.006 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:04.006 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:04.006 "hostaddr": "10.0.0.1", 00:22:04.006 "prchk_reftag": false, 00:22:04.006 "prchk_guard": false, 00:22:04.006 "hdgst": false, 00:22:04.006 "ddgst": false, 00:22:04.006 "allow_unrecognized_csi": false, 00:22:04.006 "method": "bdev_nvme_attach_controller", 00:22:04.006 "req_id": 1 00:22:04.006 } 00:22:04.006 Got JSON-RPC error response 00:22:04.006 response: 00:22:04.006 { 00:22:04.006 "code": -114, 00:22:04.006 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:04.006 } 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.007 08:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.007 request: 00:22:04.007 { 00:22:04.007 "name": "NVMe0", 00:22:04.007 "trtype": "tcp", 00:22:04.007 "traddr": "10.0.0.2", 00:22:04.007 "adrfam": "ipv4", 00:22:04.007 "trsvcid": "4420", 00:22:04.007 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:04.007 "hostaddr": "10.0.0.1", 00:22:04.007 "prchk_reftag": false, 00:22:04.007 "prchk_guard": false, 00:22:04.007 "hdgst": false, 00:22:04.007 "ddgst": false, 00:22:04.007 "allow_unrecognized_csi": false, 00:22:04.007 "method": "bdev_nvme_attach_controller", 00:22:04.007 "req_id": 1 00:22:04.007 } 00:22:04.007 Got JSON-RPC error response 00:22:04.007 response: 00:22:04.007 { 00:22:04.007 "code": -114, 00:22:04.007 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:04.007 } 00:22:04.007 08:04:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.007 request: 00:22:04.007 { 00:22:04.007 "name": "NVMe0", 00:22:04.007 "trtype": "tcp", 00:22:04.007 "traddr": "10.0.0.2", 00:22:04.007 "adrfam": "ipv4", 00:22:04.007 "trsvcid": "4420", 00:22:04.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.007 "hostaddr": "10.0.0.1", 00:22:04.007 "prchk_reftag": false, 00:22:04.007 "prchk_guard": false, 00:22:04.007 "hdgst": false, 00:22:04.007 "ddgst": false, 00:22:04.007 "multipath": "disable", 00:22:04.007 "allow_unrecognized_csi": false, 00:22:04.007 "method": "bdev_nvme_attach_controller", 00:22:04.007 "req_id": 1 00:22:04.007 } 00:22:04.007 Got JSON-RPC error response 00:22:04.007 response: 00:22:04.007 { 00:22:04.007 "code": -114, 00:22:04.007 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:04.007 } 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.007 08:04:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.007 request: 00:22:04.007 { 00:22:04.007 "name": "NVMe0", 00:22:04.007 "trtype": "tcp", 00:22:04.007 "traddr": "10.0.0.2", 00:22:04.007 "adrfam": "ipv4", 00:22:04.007 "trsvcid": "4420", 00:22:04.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.007 "hostaddr": "10.0.0.1", 00:22:04.007 "prchk_reftag": false, 00:22:04.007 "prchk_guard": false, 00:22:04.007 "hdgst": false, 00:22:04.007 "ddgst": false, 00:22:04.007 "multipath": "failover", 00:22:04.007 "allow_unrecognized_csi": false, 00:22:04.007 "method": "bdev_nvme_attach_controller", 00:22:04.007 "req_id": 1 00:22:04.007 } 00:22:04.007 Got JSON-RPC error response 00:22:04.007 response: 00:22:04.007 { 00:22:04.007 "code": -114, 00:22:04.007 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:04.007 } 00:22:04.007 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:04.008 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:04.008 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.008 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.008 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.008 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:04.008 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.008 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.266 NVMe0n1 00:22:04.266 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:04.266 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:04.266 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.266 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.266 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.266 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:04.266 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.266 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.525 00:22:04.525 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.525 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:04.525 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:04.525 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.525 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.525 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.525 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:04.525 08:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:05.464 { 00:22:05.464 "results": [ 00:22:05.464 { 00:22:05.464 "job": "NVMe0n1", 00:22:05.464 "core_mask": "0x1", 00:22:05.464 "workload": "write", 00:22:05.464 "status": "finished", 00:22:05.464 "queue_depth": 128, 00:22:05.464 "io_size": 4096, 00:22:05.464 "runtime": 1.008071, 00:22:05.464 "iops": 24316.739594730927, 00:22:05.464 "mibps": 94.98726404191768, 00:22:05.464 "io_failed": 0, 00:22:05.464 "io_timeout": 0, 00:22:05.464 "avg_latency_us": 5256.885758506134, 00:22:05.464 "min_latency_us": 4986.434782608696, 00:22:05.464 "max_latency_us": 12081.419130434782 00:22:05.464 } 00:22:05.464 ], 00:22:05.464 "core_count": 1 00:22:05.464 } 00:22:05.464 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:05.464 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.464 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:05.464 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.464 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:05.464 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2518375 00:22:05.464 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2518375 ']' 00:22:05.464 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2518375 00:22:05.464 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518375 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518375' 00:22:05.723 killing process with pid 2518375 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2518375 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2518375 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:05.723 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:05.723 [2024-11-27 08:04:57.510608] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:22:05.723 [2024-11-27 08:04:57.510653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518375 ] 00:22:05.723 [2024-11-27 08:04:57.573401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.723 [2024-11-27 08:04:57.617955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.723 [2024-11-27 08:04:58.405813] bdev.c:4926:bdev_name_add: *ERROR*: Bdev name fb3648e5-4d8f-4cda-9c69-b7a749fc6d18 already exists 00:22:05.723 [2024-11-27 08:04:58.405845] bdev.c:8146:bdev_register: *ERROR*: Unable to add uuid:fb3648e5-4d8f-4cda-9c69-b7a749fc6d18 alias for bdev NVMe1n1 00:22:05.723 [2024-11-27 08:04:58.405853] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:05.723 Running I/O for 1 seconds... 00:22:05.723 24258.00 IOPS, 94.76 MiB/s 00:22:05.723 Latency(us) 00:22:05.723 [2024-11-27T07:04:59.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.723 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:05.723 NVMe0n1 : 1.01 24316.74 94.99 0.00 0.00 5256.89 4986.43 12081.42 00:22:05.723 [2024-11-27T07:04:59.832Z] =================================================================================================================== 00:22:05.723 [2024-11-27T07:04:59.832Z] Total : 24316.74 94.99 0.00 0.00 5256.89 4986.43 12081.42 00:22:05.723 Received shutdown signal, test time was about 1.000000 seconds 00:22:05.723 00:22:05.723 Latency(us) 00:22:05.723 [2024-11-27T07:04:59.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.723 [2024-11-27T07:04:59.832Z] =================================================================================================================== 00:22:05.723 [2024-11-27T07:04:59.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:05.723 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.723 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:05.723 rmmod nvme_tcp 00:22:05.982 rmmod nvme_fabrics 00:22:05.982 rmmod nvme_keyring 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:05.982 
08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2518348 ']' 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2518348 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2518348 ']' 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2518348 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518348 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518348' 00:22:05.982 killing process with pid 2518348 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2518348 00:22:05.982 08:04:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2518348 00:22:06.241 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.242 08:05:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.146 08:05:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:08.146 00:22:08.146 real 0m11.074s 00:22:08.146 user 0m12.832s 00:22:08.146 sys 0m5.031s 00:22:08.146 08:05:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.146 08:05:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.146 ************************************ 00:22:08.146 END TEST nvmf_multicontroller 00:22:08.146 ************************************ 00:22:08.147 08:05:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:08.147 08:05:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:08.147 08:05:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.147 08:05:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.406 ************************************ 00:22:08.406 START TEST nvmf_aer 00:22:08.406 ************************************ 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:08.406 * Looking for test storage... 00:22:08.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.406 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:08.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.407 --rc genhtml_branch_coverage=1 00:22:08.407 --rc genhtml_function_coverage=1 00:22:08.407 --rc genhtml_legend=1 00:22:08.407 --rc geninfo_all_blocks=1 00:22:08.407 --rc geninfo_unexecuted_blocks=1 00:22:08.407 00:22:08.407 ' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:08.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.407 --rc genhtml_branch_coverage=1 00:22:08.407 --rc genhtml_function_coverage=1 00:22:08.407 --rc genhtml_legend=1 00:22:08.407 --rc geninfo_all_blocks=1 00:22:08.407 --rc geninfo_unexecuted_blocks=1 00:22:08.407 00:22:08.407 ' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:08.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.407 --rc genhtml_branch_coverage=1 00:22:08.407 --rc genhtml_function_coverage=1 00:22:08.407 --rc genhtml_legend=1 00:22:08.407 --rc geninfo_all_blocks=1 00:22:08.407 --rc geninfo_unexecuted_blocks=1 00:22:08.407 00:22:08.407 ' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:08.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.407 --rc genhtml_branch_coverage=1 00:22:08.407 --rc genhtml_function_coverage=1 00:22:08.407 --rc genhtml_legend=1 00:22:08.407 --rc geninfo_all_blocks=1 00:22:08.407 --rc geninfo_unexecuted_blocks=1 00:22:08.407 00:22:08.407 ' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:08.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.407 08:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:13.680 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:13.680 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:13.680 Found net devices under 0000:86:00.0: cvl_0_0 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.680 08:05:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:13.680 Found net devices under 0000:86:00.1: cvl_0_1 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:13.680 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:13.681 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:13.940 
08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:13.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:22:13.940 00:22:13.940 --- 10.0.0.2 ping statistics --- 00:22:13.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.940 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:13.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:22:13.940 00:22:13.940 --- 10.0.0.1 ping statistics --- 00:22:13.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.940 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2522204 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2522204 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2522204 ']' 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.940 08:05:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:13.940 [2024-11-27 08:05:07.914806] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:22:13.940 [2024-11-27 08:05:07.914849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.940 [2024-11-27 08:05:07.981835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.940 [2024-11-27 08:05:08.025255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.940 [2024-11-27 08:05:08.025295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.940 [2024-11-27 08:05:08.025303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.940 [2024-11-27 08:05:08.025309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.940 [2024-11-27 08:05:08.025314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.940 [2024-11-27 08:05:08.026918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.940 [2024-11-27 08:05:08.027018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.940 [2024-11-27 08:05:08.027041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.940 [2024-11-27 08:05:08.027042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.200 [2024-11-27 08:05:08.166212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.200 Malloc0 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.200 [2024-11-27 08:05:08.228388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.200 [ 00:22:14.200 { 00:22:14.200 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:14.200 "subtype": "Discovery", 00:22:14.200 "listen_addresses": [], 00:22:14.200 "allow_any_host": true, 00:22:14.200 "hosts": [] 00:22:14.200 }, 00:22:14.200 { 00:22:14.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.200 "subtype": "NVMe", 00:22:14.200 "listen_addresses": [ 00:22:14.200 { 00:22:14.200 "trtype": "TCP", 00:22:14.200 "adrfam": "IPv4", 00:22:14.200 "traddr": "10.0.0.2", 00:22:14.200 "trsvcid": "4420" 00:22:14.200 } 00:22:14.200 ], 00:22:14.200 "allow_any_host": true, 00:22:14.200 "hosts": [], 00:22:14.200 "serial_number": "SPDK00000000000001", 00:22:14.200 "model_number": "SPDK bdev Controller", 00:22:14.200 "max_namespaces": 2, 00:22:14.200 "min_cntlid": 1, 00:22:14.200 "max_cntlid": 65519, 00:22:14.200 "namespaces": [ 00:22:14.200 { 00:22:14.200 "nsid": 1, 00:22:14.200 "bdev_name": "Malloc0", 00:22:14.200 "name": "Malloc0", 00:22:14.200 "nguid": "60F94C888CFC4C019695806FC4F63FFF", 00:22:14.200 "uuid": "60f94c88-8cfc-4c01-9695-806fc4f63fff" 00:22:14.200 } 00:22:14.200 ] 00:22:14.200 } 00:22:14.200 ] 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2522398 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:14.200 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:14.459 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:14.718 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.718 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.718 Malloc1 00:22:14.718 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.718 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:14.718 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.718 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.718 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.718 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:14.718 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.718 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.718 Asynchronous Event Request test 00:22:14.718 Attaching to 10.0.0.2 00:22:14.718 Attached to 10.0.0.2 00:22:14.718 Registering asynchronous event callbacks... 00:22:14.718 Starting namespace attribute notice tests for all controllers... 00:22:14.718 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:14.718 aer_cb - Changed Namespace 00:22:14.718 Cleaning up... 
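The aer.sh sequence traced above reduces to a short RPC conversation with the namespaced target: create the TCP transport, expose a malloc bdev as namespace 1 of a subsystem capped at two namespaces (-m 2), start the aer test binary, then hot-add a second namespace so the target emits a namespace-attribute-changed AEN (the 'aer_cb - Changed Namespace' line above). A condensed sketch using the same RPCs that appear in the trace; rpc_cmd is the test suite's wrapper around scripts/rpc.py:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: AER listener with the same arguments as the trace, backgrounded
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # hot-adding a second namespace is what triggers the changed-namespace AEN
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The nvmf_get_subsystems dump that follows confirms both namespaces on cnode1: Malloc0 as nsid 1 and Malloc1 as nsid 2.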
00:22:14.718 [ 00:22:14.718 { 00:22:14.718 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:14.718 "subtype": "Discovery", 00:22:14.718 "listen_addresses": [], 00:22:14.718 "allow_any_host": true, 00:22:14.718 "hosts": [] 00:22:14.718 }, 00:22:14.718 { 00:22:14.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.718 "subtype": "NVMe", 00:22:14.719 "listen_addresses": [ 00:22:14.719 { 00:22:14.719 "trtype": "TCP", 00:22:14.719 "adrfam": "IPv4", 00:22:14.719 "traddr": "10.0.0.2", 00:22:14.719 "trsvcid": "4420" 00:22:14.719 } 00:22:14.719 ], 00:22:14.719 "allow_any_host": true, 00:22:14.719 "hosts": [], 00:22:14.719 "serial_number": "SPDK00000000000001", 00:22:14.719 "model_number": "SPDK bdev Controller", 00:22:14.719 "max_namespaces": 2, 00:22:14.719 "min_cntlid": 1, 00:22:14.719 "max_cntlid": 65519, 00:22:14.719 "namespaces": [ 00:22:14.719 { 00:22:14.719 "nsid": 1, 00:22:14.719 "bdev_name": "Malloc0", 00:22:14.719 "name": "Malloc0", 00:22:14.719 "nguid": "60F94C888CFC4C019695806FC4F63FFF", 00:22:14.719 "uuid": "60f94c88-8cfc-4c01-9695-806fc4f63fff" 00:22:14.719 }, 00:22:14.719 { 00:22:14.719 "nsid": 2, 00:22:14.719 "bdev_name": "Malloc1", 00:22:14.719 "name": "Malloc1", 00:22:14.719 "nguid": "BD9AEF9377D7485F92D38360B0F7C9E9", 00:22:14.719 "uuid": "bd9aef93-77d7-485f-92d3-8360b0f7c9e9" 00:22:14.719 } 00:22:14.719 ] 00:22:14.719 } 00:22:14.719 ] 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2522398 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.719 rmmod 
nvme_tcp 00:22:14.719 rmmod nvme_fabrics 00:22:14.719 rmmod nvme_keyring 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2522204 ']' 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2522204 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2522204 ']' 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2522204 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522204 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522204' 00:22:14.719 killing process with pid 2522204 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2522204 00:22:14.719 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2522204 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.979 08:05:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.528 00:22:17.528 real 0m8.758s 00:22:17.528 user 0m5.306s 00:22:17.528 sys 0m4.455s 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.528 ************************************ 00:22:17.528 END TEST nvmf_aer 00:22:17.528 ************************************ 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.528 ************************************ 00:22:17.528 START TEST nvmf_async_init 00:22:17.528 ************************************ 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:17.528 * Looking for test storage... 00:22:17.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:17.528 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:17.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.529 --rc genhtml_branch_coverage=1 00:22:17.529 --rc genhtml_function_coverage=1 00:22:17.529 --rc genhtml_legend=1 00:22:17.529 --rc geninfo_all_blocks=1 00:22:17.529 --rc geninfo_unexecuted_blocks=1 00:22:17.529 00:22:17.529 ' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:17.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.529 --rc genhtml_branch_coverage=1 00:22:17.529 --rc genhtml_function_coverage=1 00:22:17.529 --rc genhtml_legend=1 00:22:17.529 --rc geninfo_all_blocks=1 00:22:17.529 --rc geninfo_unexecuted_blocks=1 00:22:17.529 00:22:17.529 ' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:17.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.529 --rc genhtml_branch_coverage=1 00:22:17.529 --rc genhtml_function_coverage=1 00:22:17.529 --rc genhtml_legend=1 00:22:17.529 --rc geninfo_all_blocks=1 00:22:17.529 --rc geninfo_unexecuted_blocks=1 00:22:17.529 00:22:17.529 ' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:17.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.529 --rc genhtml_branch_coverage=1 00:22:17.529 --rc genhtml_function_coverage=1 00:22:17.529 --rc genhtml_legend=1 00:22:17.529 --rc geninfo_all_blocks=1 00:22:17.529 --rc geninfo_unexecuted_blocks=1 00:22:17.529 00:22:17.529 ' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.529 08:05:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:17.529 08:05:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d4f9486eb5f24b87a9e703ecbc10a25b 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.529 08:05:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:22.800 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:22.800 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:22.800 Found net devices under 0000:86:00.0: cvl_0_0 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:22.800 Found net devices under 0000:86:00.1: cvl_0_1 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.800 08:05:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:22.800 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:22.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:22:22.800 00:22:22.800 --- 10.0.0.2 ping statistics --- 00:22:22.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.800 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:22:22.801 00:22:22.801 --- 10.0.0.1 ping statistics --- 00:22:22.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.801 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2525860 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2525860 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2525860 ']' 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:22.801 [2024-11-27 08:05:16.543322] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
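At this point the async_init variant has launched its own nvmf_tgt inside the same namespace, this time pinned to a single core (-m 0x1) rather than the 0xF mask used for the aer run, and the harness waits for the RPC socket before configuring it. A hypothetical sketch of that readiness wait (this is not the actual waitforlisten helper; spdk_get_version is used here only as a harmless probe RPC):

  # hypothetical readiness loop, shown only for illustration
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  for _ in {1..100}; do
      # any innocuous RPC works as a probe once the target answers on /var/tmp/spdk.sock
      scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
      sleep 0.1
  done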
00:22:22.801 [2024-11-27 08:05:16.543368] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.801 [2024-11-27 08:05:16.611059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.801 [2024-11-27 08:05:16.650423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.801 [2024-11-27 08:05:16.650461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.801 [2024-11-27 08:05:16.650468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.801 [2024-11-27 08:05:16.650476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.801 [2024-11-27 08:05:16.650482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.801 [2024-11-27 08:05:16.651037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:22.801 [2024-11-27 08:05:16.786701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:22.801 null0 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d4f9486eb5f24b87a9e703ecbc10a25b 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:22.801 [2024-11-27 08:05:16.826976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.801 08:05:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.061 nvme0n1 00:22:23.061 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.061 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:23.061 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.061 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.061 [ 00:22:23.061 { 00:22:23.061 "name": "nvme0n1", 00:22:23.061 "aliases": [ 00:22:23.061 "d4f9486e-b5f2-4b87-a9e7-03ecbc10a25b" 00:22:23.061 ], 00:22:23.061 "product_name": "NVMe disk", 00:22:23.061 "block_size": 512, 00:22:23.061 "num_blocks": 2097152, 00:22:23.061 "uuid": "d4f9486e-b5f2-4b87-a9e7-03ecbc10a25b", 00:22:23.061 "numa_id": 1, 00:22:23.061 "assigned_rate_limits": { 00:22:23.061 "rw_ios_per_sec": 0, 00:22:23.061 "rw_mbytes_per_sec": 0, 00:22:23.061 "r_mbytes_per_sec": 0, 00:22:23.061 "w_mbytes_per_sec": 0 00:22:23.061 }, 00:22:23.061 "claimed": false, 00:22:23.061 "zoned": false, 00:22:23.061 "supported_io_types": { 00:22:23.061 "read": true, 00:22:23.061 "write": true, 00:22:23.061 "unmap": false, 00:22:23.061 "flush": true, 00:22:23.061 "reset": true, 00:22:23.061 "nvme_admin": true, 00:22:23.061 "nvme_io": true, 00:22:23.061 "nvme_io_md": false, 00:22:23.061 "write_zeroes": true, 00:22:23.061 "zcopy": false, 00:22:23.061 "get_zone_info": false, 00:22:23.061 "zone_management": false, 00:22:23.061 "zone_append": false, 00:22:23.061 "compare": true, 00:22:23.061 "compare_and_write": true, 00:22:23.061 "abort": true, 00:22:23.061 "seek_hole": false, 00:22:23.061 "seek_data": false, 00:22:23.061 "copy": true, 00:22:23.061 "nvme_iov_md": false 00:22:23.061 }, 00:22:23.061 
"memory_domains": [ 00:22:23.061 { 00:22:23.061 "dma_device_id": "system", 00:22:23.061 "dma_device_type": 1 00:22:23.061 } 00:22:23.061 ], 00:22:23.061 "driver_specific": { 00:22:23.061 "nvme": [ 00:22:23.061 { 00:22:23.061 "trid": { 00:22:23.061 "trtype": "TCP", 00:22:23.061 "adrfam": "IPv4", 00:22:23.061 "traddr": "10.0.0.2", 00:22:23.061 "trsvcid": "4420", 00:22:23.061 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:23.061 }, 00:22:23.061 "ctrlr_data": { 00:22:23.061 "cntlid": 1, 00:22:23.061 "vendor_id": "0x8086", 00:22:23.061 "model_number": "SPDK bdev Controller", 00:22:23.061 "serial_number": "00000000000000000000", 00:22:23.061 "firmware_revision": "25.01", 00:22:23.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:23.061 "oacs": { 00:22:23.061 "security": 0, 00:22:23.061 "format": 0, 00:22:23.061 "firmware": 0, 00:22:23.061 "ns_manage": 0 00:22:23.061 }, 00:22:23.061 "multi_ctrlr": true, 00:22:23.061 "ana_reporting": false 00:22:23.061 }, 00:22:23.061 "vs": { 00:22:23.061 "nvme_version": "1.3" 00:22:23.061 }, 00:22:23.061 "ns_data": { 00:22:23.061 "id": 1, 00:22:23.061 "can_share": true 00:22:23.061 } 00:22:23.061 } 00:22:23.061 ], 00:22:23.061 "mp_policy": "active_passive" 00:22:23.061 } 00:22:23.061 } 00:22:23.061 ] 00:22:23.061 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.061 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:23.061 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.061 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.061 [2024-11-27 08:05:17.075505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:23.061 [2024-11-27 08:05:17.075560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba3e20 (9): Bad file descriptor 00:22:23.321 [2024-11-27 08:05:17.207032] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.321 [ 00:22:23.321 { 00:22:23.321 "name": "nvme0n1", 00:22:23.321 "aliases": [ 00:22:23.321 "d4f9486e-b5f2-4b87-a9e7-03ecbc10a25b" 00:22:23.321 ], 00:22:23.321 "product_name": "NVMe disk", 00:22:23.321 "block_size": 512, 00:22:23.321 "num_blocks": 2097152, 00:22:23.321 "uuid": "d4f9486e-b5f2-4b87-a9e7-03ecbc10a25b", 00:22:23.321 "numa_id": 1, 00:22:23.321 "assigned_rate_limits": { 00:22:23.321 "rw_ios_per_sec": 0, 00:22:23.321 "rw_mbytes_per_sec": 0, 00:22:23.321 "r_mbytes_per_sec": 0, 00:22:23.321 "w_mbytes_per_sec": 0 00:22:23.321 }, 00:22:23.321 "claimed": false, 00:22:23.321 "zoned": false, 00:22:23.321 "supported_io_types": { 00:22:23.321 "read": true, 00:22:23.321 "write": true, 00:22:23.321 "unmap": false, 00:22:23.321 "flush": true, 00:22:23.321 "reset": true, 00:22:23.321 "nvme_admin": true, 00:22:23.321 "nvme_io": true, 00:22:23.321 "nvme_io_md": false, 00:22:23.321 "write_zeroes": true, 00:22:23.321 "zcopy": false, 00:22:23.321 "get_zone_info": false, 00:22:23.321 "zone_management": false, 00:22:23.321 "zone_append": false, 00:22:23.321 "compare": true, 00:22:23.321 "compare_and_write": true, 00:22:23.321 "abort": true, 00:22:23.321 "seek_hole": false, 00:22:23.321 "seek_data": false, 00:22:23.321 "copy": true, 00:22:23.321 "nvme_iov_md": false 00:22:23.321 }, 00:22:23.321 "memory_domains": [ 00:22:23.321 { 00:22:23.321 "dma_device_id": "system", 00:22:23.321 "dma_device_type": 1 00:22:23.321 } 00:22:23.321 ], 00:22:23.321 "driver_specific": { 00:22:23.321 "nvme": [ 00:22:23.321 { 00:22:23.321 "trid": { 00:22:23.321 "trtype": "TCP", 00:22:23.321 "adrfam": "IPv4", 00:22:23.321 "traddr": "10.0.0.2", 00:22:23.321 "trsvcid": "4420", 00:22:23.321 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:23.321 }, 00:22:23.321 "ctrlr_data": { 00:22:23.321 "cntlid": 2, 00:22:23.321 "vendor_id": "0x8086", 00:22:23.321 "model_number": "SPDK bdev Controller", 00:22:23.321 "serial_number": "00000000000000000000", 00:22:23.321 "firmware_revision": "25.01", 00:22:23.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:23.321 "oacs": { 00:22:23.321 "security": 0, 00:22:23.321 "format": 0, 00:22:23.321 "firmware": 0, 00:22:23.321 "ns_manage": 0 00:22:23.321 }, 00:22:23.321 "multi_ctrlr": true, 00:22:23.321 "ana_reporting": false 00:22:23.321 }, 00:22:23.321 "vs": { 00:22:23.321 "nvme_version": "1.3" 00:22:23.321 }, 00:22:23.321 "ns_data": { 00:22:23.321 "id": 1, 00:22:23.321 "can_share": true 00:22:23.321 } 00:22:23.321 } 00:22:23.321 ], 00:22:23.321 "mp_policy": "active_passive" 00:22:23.321 } 00:22:23.321 } 00:22:23.321 ] 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.DQPgKNEGL2 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.DQPgKNEGL2 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.DQPgKNEGL2 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.321 [2024-11-27 08:05:17.264079] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.321 [2024-11-27 08:05:17.264183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.321 [2024-11-27 08:05:17.280134] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:23.321 nvme0n1 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.321 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.321 [ 00:22:23.321 { 00:22:23.321 "name": "nvme0n1", 00:22:23.322 "aliases": [ 00:22:23.322 "d4f9486e-b5f2-4b87-a9e7-03ecbc10a25b" 00:22:23.322 ], 00:22:23.322 "product_name": "NVMe disk", 00:22:23.322 "block_size": 512, 00:22:23.322 "num_blocks": 2097152, 00:22:23.322 "uuid": "d4f9486e-b5f2-4b87-a9e7-03ecbc10a25b", 00:22:23.322 "numa_id": 1, 00:22:23.322 "assigned_rate_limits": { 00:22:23.322 "rw_ios_per_sec": 0, 00:22:23.322 "rw_mbytes_per_sec": 0, 00:22:23.322 "r_mbytes_per_sec": 0, 00:22:23.322 "w_mbytes_per_sec": 0 00:22:23.322 }, 00:22:23.322 "claimed": false, 00:22:23.322 "zoned": false, 00:22:23.322 "supported_io_types": { 00:22:23.322 "read": true, 00:22:23.322 "write": true, 00:22:23.322 "unmap": false, 00:22:23.322 "flush": true, 00:22:23.322 "reset": true, 00:22:23.322 "nvme_admin": true, 00:22:23.322 "nvme_io": true, 00:22:23.322 "nvme_io_md": false, 00:22:23.322 "write_zeroes": true, 00:22:23.322 "zcopy": false, 00:22:23.322 "get_zone_info": false, 00:22:23.322 "zone_management": false, 00:22:23.322 "zone_append": false, 00:22:23.322 "compare": true, 00:22:23.322 "compare_and_write": true, 00:22:23.322 "abort": true, 00:22:23.322 "seek_hole": false, 00:22:23.322 "seek_data": false, 00:22:23.322 "copy": true, 00:22:23.322 "nvme_iov_md": false 00:22:23.322 }, 00:22:23.322 "memory_domains": [ 00:22:23.322 { 00:22:23.322 "dma_device_id": "system", 00:22:23.322 "dma_device_type": 1 00:22:23.322 } 00:22:23.322 ], 00:22:23.322 "driver_specific": { 00:22:23.322 "nvme": [ 00:22:23.322 { 00:22:23.322 "trid": { 00:22:23.322 "trtype": "TCP", 00:22:23.322 "adrfam": "IPv4", 00:22:23.322 "traddr": "10.0.0.2", 00:22:23.322 "trsvcid": "4421", 00:22:23.322 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:23.322 }, 00:22:23.322 "ctrlr_data": { 00:22:23.322 "cntlid": 3, 00:22:23.322 "vendor_id": "0x8086", 00:22:23.322 "model_number": "SPDK bdev Controller", 00:22:23.322 "serial_number": "00000000000000000000", 00:22:23.322 "firmware_revision": "25.01", 00:22:23.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:23.322 "oacs": { 00:22:23.322 "security": 0, 00:22:23.322 "format": 0, 00:22:23.322 "firmware": 0, 00:22:23.322 "ns_manage": 0 00:22:23.322 }, 00:22:23.322 "multi_ctrlr": true, 00:22:23.322 "ana_reporting": false 00:22:23.322 }, 00:22:23.322 "vs": { 00:22:23.322 "nvme_version": "1.3" 00:22:23.322 }, 00:22:23.322 "ns_data": { 00:22:23.322 "id": 1, 00:22:23.322 "can_share": true 00:22:23.322 } 00:22:23.322 } 00:22:23.322 ], 00:22:23.322 "mp_policy": "active_passive" 00:22:23.322 } 00:22:23.322 } 00:22:23.322 ] 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.DQPgKNEGL2 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
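Steps 53 through 66 of async_init.sh above establish a TLS-secured listener on port 4421 and re-attach the initiator with a pre-shared key. Condensed into one place, and reusing the sample interchange key, key name, and NQNs shown in the log, the flow looks roughly like the sketch below; treat it as an outline of what the script does under these assumptions, not a verbatim excerpt.

    # Hedged outline of the TLS/PSK path exercised above (key value, key name and NQNs copied from the log;
    # rpc.py location assumed from this workspace).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY_PATH=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    $RPC keyring_file_add_key key0 "$KEY_PATH"                       # register the PSK in the keyring
    $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4421 --secure-channel                 # TLS listener (flagged experimental above)
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
         -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

The subsequent bdev_get_bdevs output confirms the re-attach landed on trsvcid 4421 with cntlid 3 before the key file is removed and the controller detached.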
00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.322 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.322 rmmod nvme_tcp 00:22:23.322 rmmod nvme_fabrics 00:22:23.322 rmmod nvme_keyring 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2525860 ']' 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2525860 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2525860 ']' 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2525860 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2525860 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.581 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2525860' 00:22:23.581 killing process with pid 2525860 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2525860 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2525860 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
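The nvmftestfini block above unloads the kernel initiator modules, kills the target process, and strips the SPDK iptables rules before the namespace cleanup that follows. A minimal sketch of that teardown, with the pid specific to this run, might look like:

    # Hedged teardown sketch mirroring nvmftestfini (pid 2525860 belongs to this run only).
    sync
    modprobe -v -r nvme-tcp       # source of the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill 2525860                  # nvmf_tgt reactor process; the script then waits for it to exit
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK_NVMF ACCEPT rules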
00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.582 08:05:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.118 00:22:26.118 real 0m8.605s 00:22:26.118 user 0m2.676s 00:22:26.118 sys 0m4.300s 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:26.118 ************************************ 00:22:26.118 END TEST nvmf_async_init 00:22:26.118 ************************************ 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.118 ************************************ 00:22:26.118 START TEST dma 00:22:26.118 ************************************ 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:26.118 * Looking for test storage... 00:22:26.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:26.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.118 --rc genhtml_branch_coverage=1 00:22:26.118 --rc genhtml_function_coverage=1 00:22:26.118 --rc genhtml_legend=1 00:22:26.118 --rc geninfo_all_blocks=1 00:22:26.118 --rc geninfo_unexecuted_blocks=1 00:22:26.118 00:22:26.118 ' 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:26.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.118 --rc genhtml_branch_coverage=1 00:22:26.118 --rc genhtml_function_coverage=1 00:22:26.118 --rc genhtml_legend=1 00:22:26.118 --rc geninfo_all_blocks=1 00:22:26.118 --rc geninfo_unexecuted_blocks=1 00:22:26.118 00:22:26.118 ' 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:26.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.118 --rc genhtml_branch_coverage=1 00:22:26.118 --rc genhtml_function_coverage=1 00:22:26.118 --rc genhtml_legend=1 00:22:26.118 --rc geninfo_all_blocks=1 00:22:26.118 --rc geninfo_unexecuted_blocks=1 00:22:26.118 00:22:26.118 ' 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:26.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.118 --rc genhtml_branch_coverage=1 00:22:26.118 --rc genhtml_function_coverage=1 00:22:26.118 --rc genhtml_legend=1 00:22:26.118 --rc geninfo_all_blocks=1 00:22:26.118 --rc geninfo_unexecuted_blocks=1 00:22:26.118 00:22:26.118 ' 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.118 
08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.118 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:26.119 00:22:26.119 real 0m0.197s 00:22:26.119 user 0m0.108s 00:22:26.119 sys 0m0.103s 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.119 08:05:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:26.119 ************************************ 00:22:26.119 END TEST dma 00:22:26.119 ************************************ 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.119 ************************************ 00:22:26.119 START TEST nvmf_identify 00:22:26.119 
************************************ 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:26.119 * Looking for test storage... 00:22:26.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:26.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.119 --rc genhtml_branch_coverage=1 00:22:26.119 --rc genhtml_function_coverage=1 00:22:26.119 --rc genhtml_legend=1 00:22:26.119 --rc geninfo_all_blocks=1 00:22:26.119 --rc geninfo_unexecuted_blocks=1 00:22:26.119 00:22:26.119 ' 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:26.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.119 --rc genhtml_branch_coverage=1 00:22:26.119 --rc genhtml_function_coverage=1 00:22:26.119 --rc genhtml_legend=1 00:22:26.119 --rc geninfo_all_blocks=1 00:22:26.119 --rc geninfo_unexecuted_blocks=1 00:22:26.119 00:22:26.119 ' 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:26.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.119 --rc genhtml_branch_coverage=1 00:22:26.119 --rc genhtml_function_coverage=1 00:22:26.119 --rc genhtml_legend=1 00:22:26.119 --rc geninfo_all_blocks=1 00:22:26.119 --rc geninfo_unexecuted_blocks=1 00:22:26.119 00:22:26.119 ' 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:26.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.119 --rc genhtml_branch_coverage=1 00:22:26.119 --rc genhtml_function_coverage=1 00:22:26.119 --rc genhtml_legend=1 00:22:26.119 --rc geninfo_all_blocks=1 00:22:26.119 --rc geninfo_unexecuted_blocks=1 00:22:26.119 00:22:26.119 ' 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.119 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.120 08:05:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.519 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.519 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.519 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.520 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.520 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:22:31.520 00:22:31.520 --- 10.0.0.2 ping statistics --- 00:22:31.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.520 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:22:31.520 00:22:31.520 --- 10.0.0.1 ping statistics --- 00:22:31.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.520 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2529526 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2529526 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2529526 ']' 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.520 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.520 [2024-11-27 08:05:25.607610] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:22:31.520 [2024-11-27 08:05:25.607660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.779 [2024-11-27 08:05:25.674854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.779 [2024-11-27 08:05:25.719852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.779 [2024-11-27 08:05:25.719887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.779 [2024-11-27 08:05:25.719894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.779 [2024-11-27 08:05:25.719900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.779 [2024-11-27 08:05:25.719905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.779 [2024-11-27 08:05:25.721538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.779 [2024-11-27 08:05:25.721638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.779 [2024-11-27 08:05:25.721715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.779 [2024-11-27 08:05:25.721717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.779 [2024-11-27 08:05:25.824681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.779 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 Malloc0 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 [2024-11-27 08:05:25.926473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 [ 00:22:32.040 { 00:22:32.040 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:32.040 "subtype": "Discovery", 00:22:32.040 "listen_addresses": [ 00:22:32.040 { 00:22:32.040 "trtype": "TCP", 00:22:32.040 "adrfam": "IPv4", 00:22:32.040 "traddr": "10.0.0.2", 00:22:32.040 "trsvcid": "4420" 00:22:32.040 } 00:22:32.040 ], 00:22:32.040 "allow_any_host": true, 00:22:32.040 "hosts": [] 00:22:32.040 }, 00:22:32.040 { 00:22:32.040 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.040 "subtype": "NVMe", 00:22:32.040 "listen_addresses": [ 00:22:32.040 { 00:22:32.040 "trtype": "TCP", 00:22:32.040 "adrfam": "IPv4", 00:22:32.040 "traddr": "10.0.0.2", 00:22:32.040 "trsvcid": "4420" 00:22:32.040 } 00:22:32.040 ], 00:22:32.040 "allow_any_host": true, 00:22:32.040 "hosts": [], 00:22:32.040 "serial_number": "SPDK00000000000001", 00:22:32.040 "model_number": "SPDK bdev Controller", 00:22:32.040 "max_namespaces": 32, 00:22:32.040 "min_cntlid": 1, 00:22:32.040 "max_cntlid": 65519, 00:22:32.040 "namespaces": [ 00:22:32.040 { 00:22:32.040 "nsid": 1, 00:22:32.040 "bdev_name": "Malloc0", 00:22:32.040 "name": "Malloc0", 00:22:32.040 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:32.040 "eui64": "ABCDEF0123456789", 00:22:32.040 "uuid": "2121dc0b-9847-43b3-9507-43c32205cd1b" 00:22:32.040 } 00:22:32.040 ] 00:22:32.040 } 00:22:32.040 ] 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.040 08:05:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:32.040 [2024-11-27 08:05:25.978031] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:22:32.040 [2024-11-27 08:05:25.978067] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529555 ] 00:22:32.040 [2024-11-27 08:05:26.017909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:32.040 [2024-11-27 08:05:26.021966] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:32.040 [2024-11-27 08:05:26.021974] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:32.040 [2024-11-27 08:05:26.021989] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:32.040 [2024-11-27 08:05:26.021997] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:32.040 [2024-11-27 08:05:26.022444] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:32.040 [2024-11-27 08:05:26.022479] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c86690 0 00:22:32.040 [2024-11-27 08:05:26.028959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:32.040 [2024-11-27 08:05:26.028974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:32.040 [2024-11-27 08:05:26.028979] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:32.040 [2024-11-27 08:05:26.028982] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:32.040 [2024-11-27 08:05:26.029016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.040 [2024-11-27 08:05:26.029022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.040 [2024-11-27 08:05:26.029026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c86690) 00:22:32.040 [2024-11-27 08:05:26.029039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:32.040 [2024-11-27 08:05:26.029057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8100, cid 0, qid 0 00:22:32.040 [2024-11-27 08:05:26.035957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.040 [2024-11-27 08:05:26.035967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.040 [2024-11-27 08:05:26.035971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.035975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8100) on tqpair=0x1c86690 00:22:32.041 [2024-11-27 08:05:26.035985] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:32.041 [2024-11-27 08:05:26.035991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:32.041 [2024-11-27 08:05:26.035996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:32.041 [2024-11-27 08:05:26.036011] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c86690) 00:22:32.041 [2024-11-27 08:05:26.036026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.041 [2024-11-27 08:05:26.036040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8100, cid 0, qid 0 00:22:32.041 [2024-11-27 08:05:26.036127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.041 [2024-11-27 08:05:26.036133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.041 [2024-11-27 08:05:26.036136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8100) on tqpair=0x1c86690 00:22:32.041 [2024-11-27 08:05:26.036147] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:32.041 [2024-11-27 08:05:26.036154] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:32.041 [2024-11-27 08:05:26.036160] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c86690) 00:22:32.041 [2024-11-27 08:05:26.036175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.041 [2024-11-27 08:05:26.036186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8100, cid 0, qid 0 00:22:32.041 [2024-11-27 08:05:26.036248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.041 [2024-11-27 08:05:26.036254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.041 [2024-11-27 08:05:26.036257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8100) on tqpair=0x1c86690 00:22:32.041 [2024-11-27 08:05:26.036266] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:32.041 [2024-11-27 08:05:26.036273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:32.041 [2024-11-27 08:05:26.036278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c86690) 00:22:32.041 [2024-11-27 08:05:26.036290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.041 [2024-11-27 08:05:26.036300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8100, cid 0, qid 0 
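For reference, the target-side configuration that the rpc_cmd calls above put in place (a TCP transport, a 64 MB malloc bdev Malloc0 with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and data plus discovery listeners on 10.0.0.2:4420) maps onto standalone SPDK RPC invocations. A minimal sketch, assuming a running nvmf_tgt and the repository's scripts/rpc.py talking to the default RPC socket; the method names and flag values are copied from the log, the script path is illustrative:

  # Sketch only: assumes nvmf_tgt is already running and rpc.py targets the default /var/tmp/spdk.sock
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems   # expect the discovery subsystem plus cnode1 with namespace Malloc0

If this reproduces the JSON shown earlier, the target is in the same state that the spdk_nvme_identify step traced below exercises.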
00:22:32.041 [2024-11-27 08:05:26.036363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.041 [2024-11-27 08:05:26.036369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.041 [2024-11-27 08:05:26.036372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8100) on tqpair=0x1c86690 00:22:32.041 [2024-11-27 08:05:26.036380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:32.041 [2024-11-27 08:05:26.036388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c86690) 00:22:32.041 [2024-11-27 08:05:26.036400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.041 [2024-11-27 08:05:26.036410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8100, cid 0, qid 0 00:22:32.041 [2024-11-27 08:05:26.036473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.041 [2024-11-27 08:05:26.036479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.041 [2024-11-27 08:05:26.036482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8100) on tqpair=0x1c86690 00:22:32.041 [2024-11-27 08:05:26.036489] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:32.041 [2024-11-27 08:05:26.036494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:32.041 [2024-11-27 08:05:26.036500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:32.041 [2024-11-27 08:05:26.036608] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:32.041 [2024-11-27 08:05:26.036612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:32.041 [2024-11-27 08:05:26.036622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c86690) 00:22:32.041 [2024-11-27 08:05:26.036634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.041 [2024-11-27 08:05:26.036644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8100, cid 0, qid 0 00:22:32.041 [2024-11-27 08:05:26.036707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.041 [2024-11-27 08:05:26.036713] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.041 [2024-11-27 08:05:26.036716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8100) on tqpair=0x1c86690 00:22:32.041 [2024-11-27 08:05:26.036724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:32.041 [2024-11-27 08:05:26.036732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c86690) 00:22:32.041 [2024-11-27 08:05:26.036744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.041 [2024-11-27 08:05:26.036754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8100, cid 0, qid 0 00:22:32.041 [2024-11-27 08:05:26.036817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.041 [2024-11-27 08:05:26.036822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.041 [2024-11-27 08:05:26.036826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8100) on tqpair=0x1c86690 00:22:32.041 [2024-11-27 08:05:26.036833] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:32.041 [2024-11-27 08:05:26.036837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:32.041 [2024-11-27 08:05:26.036844] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:32.041 [2024-11-27 08:05:26.036853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:32.041 [2024-11-27 08:05:26.036861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.036865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c86690) 00:22:32.041 [2024-11-27 08:05:26.036870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.041 [2024-11-27 08:05:26.036880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8100, cid 0, qid 0 00:22:32.041 [2024-11-27 08:05:26.037008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.041 [2024-11-27 08:05:26.037014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.041 [2024-11-27 08:05:26.037017] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.041 [2024-11-27 08:05:26.037020] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c86690): datao=0, datal=4096, cccid=0 00:22:32.042 [2024-11-27 08:05:26.037024] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1ce8100) on tqpair(0x1c86690): expected_datao=0, payload_size=4096 00:22:32.042 [2024-11-27 08:05:26.037031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037038] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037042] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.042 [2024-11-27 08:05:26.037056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.042 [2024-11-27 08:05:26.037059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8100) on tqpair=0x1c86690 00:22:32.042 [2024-11-27 08:05:26.037070] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:32.042 [2024-11-27 08:05:26.037076] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:32.042 [2024-11-27 08:05:26.037080] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:32.042 [2024-11-27 08:05:26.037086] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:32.042 [2024-11-27 08:05:26.037090] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:32.042 [2024-11-27 08:05:26.037094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:32.042 [2024-11-27 08:05:26.037102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:32.042 [2024-11-27 08:05:26.037109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c86690) 00:22:32.042 [2024-11-27 08:05:26.037121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:32.042 [2024-11-27 08:05:26.037132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8100, cid 0, qid 0 00:22:32.042 [2024-11-27 08:05:26.037195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.042 [2024-11-27 08:05:26.037201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.042 [2024-11-27 08:05:26.037204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8100) on tqpair=0x1c86690 00:22:32.042 [2024-11-27 08:05:26.037213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c86690) 00:22:32.042 
[2024-11-27 08:05:26.037225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.042 [2024-11-27 08:05:26.037231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c86690) 00:22:32.042 [2024-11-27 08:05:26.037242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.042 [2024-11-27 08:05:26.037247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c86690) 00:22:32.042 [2024-11-27 08:05:26.037261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.042 [2024-11-27 08:05:26.037266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.042 [2024-11-27 08:05:26.037277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.042 [2024-11-27 08:05:26.037282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:32.042 [2024-11-27 08:05:26.037292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:32.042 [2024-11-27 08:05:26.037297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c86690) 00:22:32.042 [2024-11-27 08:05:26.037306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.042 [2024-11-27 08:05:26.037317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8100, cid 0, qid 0 00:22:32.042 [2024-11-27 08:05:26.037322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8280, cid 1, qid 0 00:22:32.042 [2024-11-27 08:05:26.037326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8400, cid 2, qid 0 00:22:32.042 [2024-11-27 08:05:26.037330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.042 [2024-11-27 08:05:26.037334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8700, cid 4, qid 0 00:22:32.042 [2024-11-27 08:05:26.037432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.042 [2024-11-27 08:05:26.037438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.042 [2024-11-27 08:05:26.037441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:32.042 [2024-11-27 08:05:26.037444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8700) on tqpair=0x1c86690 00:22:32.042 [2024-11-27 08:05:26.037448] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:32.042 [2024-11-27 08:05:26.037453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:32.042 [2024-11-27 08:05:26.037462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c86690) 00:22:32.042 [2024-11-27 08:05:26.037471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.042 [2024-11-27 08:05:26.037481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8700, cid 4, qid 0 00:22:32.042 [2024-11-27 08:05:26.037550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.042 [2024-11-27 08:05:26.037555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.042 [2024-11-27 08:05:26.037558] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037561] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c86690): datao=0, datal=4096, cccid=4 00:22:32.042 [2024-11-27 08:05:26.037565] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce8700) on tqpair(0x1c86690): expected_datao=0, payload_size=4096 00:22:32.042 [2024-11-27 08:05:26.037569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037587] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037591] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.042 [2024-11-27 08:05:26.037624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.042 [2024-11-27 08:05:26.037630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.042 [2024-11-27 08:05:26.037633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.037636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8700) on tqpair=0x1c86690 00:22:32.043 [2024-11-27 08:05:26.037646] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:32.043 [2024-11-27 08:05:26.037668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.037673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c86690) 00:22:32.043 [2024-11-27 08:05:26.037678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.043 [2024-11-27 08:05:26.037684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.037687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.037691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c86690) 00:22:32.043 [2024-11-27 08:05:26.037696] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.043 [2024-11-27 08:05:26.037709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8700, cid 4, qid 0 00:22:32.043 [2024-11-27 08:05:26.037714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8880, cid 5, qid 0 00:22:32.043 [2024-11-27 08:05:26.037812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.043 [2024-11-27 08:05:26.037818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.043 [2024-11-27 08:05:26.037821] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.037824] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c86690): datao=0, datal=1024, cccid=4 00:22:32.043 [2024-11-27 08:05:26.037827] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce8700) on tqpair(0x1c86690): expected_datao=0, payload_size=1024 00:22:32.043 [2024-11-27 08:05:26.037831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.037837] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.037840] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.037845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.043 [2024-11-27 08:05:26.037850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.043 [2024-11-27 08:05:26.037852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.037856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8880) on tqpair=0x1c86690 00:22:32.043 [2024-11-27 08:05:26.080957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.043 [2024-11-27 08:05:26.080968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.043 [2024-11-27 08:05:26.080972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.080975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8700) on tqpair=0x1c86690 00:22:32.043 [2024-11-27 08:05:26.080987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.080991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c86690) 00:22:32.043 [2024-11-27 08:05:26.080998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.043 [2024-11-27 08:05:26.081015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8700, cid 4, qid 0 00:22:32.043 [2024-11-27 08:05:26.081094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.043 [2024-11-27 08:05:26.081101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.043 [2024-11-27 08:05:26.081106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.081110] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c86690): datao=0, datal=3072, cccid=4 00:22:32.043 [2024-11-27 08:05:26.081114] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce8700) on tqpair(0x1c86690): expected_datao=0, payload_size=3072 00:22:32.043 [2024-11-27 08:05:26.081117] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.081123] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.081127] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.123025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.043 [2024-11-27 08:05:26.123035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.043 [2024-11-27 08:05:26.123038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.123042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8700) on tqpair=0x1c86690 00:22:32.043 [2024-11-27 08:05:26.123052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.123056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c86690) 00:22:32.043 [2024-11-27 08:05:26.123063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.043 [2024-11-27 08:05:26.123080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8700, cid 4, qid 0 00:22:32.043 [2024-11-27 08:05:26.123145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.043 [2024-11-27 08:05:26.123152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.043 [2024-11-27 08:05:26.123156] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.123160] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c86690): datao=0, datal=8, cccid=4 00:22:32.043 [2024-11-27 08:05:26.123165] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce8700) on tqpair(0x1c86690): expected_datao=0, payload_size=8 00:22:32.043 [2024-11-27 08:05:26.123170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.123177] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.043 [2024-11-27 08:05:26.123182] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.309 [2024-11-27 08:05:26.164075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.309 [2024-11-27 08:05:26.164088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.309 [2024-11-27 08:05:26.164091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.309 [2024-11-27 08:05:26.164095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8700) on tqpair=0x1c86690 00:22:32.309 ===================================================== 00:22:32.309 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:32.309 ===================================================== 00:22:32.309 Controller Capabilities/Features 00:22:32.309 ================================ 00:22:32.309 Vendor ID: 0000 00:22:32.309 Subsystem Vendor ID: 0000 00:22:32.309 Serial Number: .................... 00:22:32.309 Model Number: ........................................ 
00:22:32.309 Firmware Version: 25.01 00:22:32.309 Recommended Arb Burst: 0 00:22:32.309 IEEE OUI Identifier: 00 00 00 00:22:32.309 Multi-path I/O 00:22:32.309 May have multiple subsystem ports: No 00:22:32.309 May have multiple controllers: No 00:22:32.309 Associated with SR-IOV VF: No 00:22:32.309 Max Data Transfer Size: 131072 00:22:32.309 Max Number of Namespaces: 0 00:22:32.309 Max Number of I/O Queues: 1024 00:22:32.309 NVMe Specification Version (VS): 1.3 00:22:32.309 NVMe Specification Version (Identify): 1.3 00:22:32.309 Maximum Queue Entries: 128 00:22:32.309 Contiguous Queues Required: Yes 00:22:32.309 Arbitration Mechanisms Supported 00:22:32.309 Weighted Round Robin: Not Supported 00:22:32.309 Vendor Specific: Not Supported 00:22:32.309 Reset Timeout: 15000 ms 00:22:32.309 Doorbell Stride: 4 bytes 00:22:32.309 NVM Subsystem Reset: Not Supported 00:22:32.309 Command Sets Supported 00:22:32.309 NVM Command Set: Supported 00:22:32.309 Boot Partition: Not Supported 00:22:32.309 Memory Page Size Minimum: 4096 bytes 00:22:32.309 Memory Page Size Maximum: 4096 bytes 00:22:32.309 Persistent Memory Region: Not Supported 00:22:32.309 Optional Asynchronous Events Supported 00:22:32.309 Namespace Attribute Notices: Not Supported 00:22:32.309 Firmware Activation Notices: Not Supported 00:22:32.309 ANA Change Notices: Not Supported 00:22:32.309 PLE Aggregate Log Change Notices: Not Supported 00:22:32.309 LBA Status Info Alert Notices: Not Supported 00:22:32.309 EGE Aggregate Log Change Notices: Not Supported 00:22:32.309 Normal NVM Subsystem Shutdown event: Not Supported 00:22:32.309 Zone Descriptor Change Notices: Not Supported 00:22:32.309 Discovery Log Change Notices: Supported 00:22:32.309 Controller Attributes 00:22:32.309 128-bit Host Identifier: Not Supported 00:22:32.309 Non-Operational Permissive Mode: Not Supported 00:22:32.309 NVM Sets: Not Supported 00:22:32.309 Read Recovery Levels: Not Supported 00:22:32.309 Endurance Groups: Not Supported 00:22:32.309 Predictable Latency Mode: Not Supported 00:22:32.309 Traffic Based Keep ALive: Not Supported 00:22:32.309 Namespace Granularity: Not Supported 00:22:32.309 SQ Associations: Not Supported 00:22:32.309 UUID List: Not Supported 00:22:32.309 Multi-Domain Subsystem: Not Supported 00:22:32.309 Fixed Capacity Management: Not Supported 00:22:32.309 Variable Capacity Management: Not Supported 00:22:32.309 Delete Endurance Group: Not Supported 00:22:32.309 Delete NVM Set: Not Supported 00:22:32.309 Extended LBA Formats Supported: Not Supported 00:22:32.309 Flexible Data Placement Supported: Not Supported 00:22:32.309 00:22:32.309 Controller Memory Buffer Support 00:22:32.309 ================================ 00:22:32.309 Supported: No 00:22:32.309 00:22:32.309 Persistent Memory Region Support 00:22:32.309 ================================ 00:22:32.309 Supported: No 00:22:32.309 00:22:32.309 Admin Command Set Attributes 00:22:32.309 ============================ 00:22:32.310 Security Send/Receive: Not Supported 00:22:32.310 Format NVM: Not Supported 00:22:32.310 Firmware Activate/Download: Not Supported 00:22:32.310 Namespace Management: Not Supported 00:22:32.310 Device Self-Test: Not Supported 00:22:32.310 Directives: Not Supported 00:22:32.310 NVMe-MI: Not Supported 00:22:32.310 Virtualization Management: Not Supported 00:22:32.310 Doorbell Buffer Config: Not Supported 00:22:32.310 Get LBA Status Capability: Not Supported 00:22:32.310 Command & Feature Lockdown Capability: Not Supported 00:22:32.310 Abort Command Limit: 1 00:22:32.310 Async 
Event Request Limit: 4 00:22:32.310 Number of Firmware Slots: N/A 00:22:32.310 Firmware Slot 1 Read-Only: N/A 00:22:32.310 Firmware Activation Without Reset: N/A 00:22:32.310 Multiple Update Detection Support: N/A 00:22:32.310 Firmware Update Granularity: No Information Provided 00:22:32.310 Per-Namespace SMART Log: No 00:22:32.310 Asymmetric Namespace Access Log Page: Not Supported 00:22:32.310 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:32.310 Command Effects Log Page: Not Supported 00:22:32.310 Get Log Page Extended Data: Supported 00:22:32.310 Telemetry Log Pages: Not Supported 00:22:32.310 Persistent Event Log Pages: Not Supported 00:22:32.310 Supported Log Pages Log Page: May Support 00:22:32.310 Commands Supported & Effects Log Page: Not Supported 00:22:32.310 Feature Identifiers & Effects Log Page:May Support 00:22:32.310 NVMe-MI Commands & Effects Log Page: May Support 00:22:32.310 Data Area 4 for Telemetry Log: Not Supported 00:22:32.310 Error Log Page Entries Supported: 128 00:22:32.310 Keep Alive: Not Supported 00:22:32.310 00:22:32.310 NVM Command Set Attributes 00:22:32.310 ========================== 00:22:32.310 Submission Queue Entry Size 00:22:32.310 Max: 1 00:22:32.310 Min: 1 00:22:32.310 Completion Queue Entry Size 00:22:32.310 Max: 1 00:22:32.310 Min: 1 00:22:32.310 Number of Namespaces: 0 00:22:32.310 Compare Command: Not Supported 00:22:32.310 Write Uncorrectable Command: Not Supported 00:22:32.310 Dataset Management Command: Not Supported 00:22:32.310 Write Zeroes Command: Not Supported 00:22:32.310 Set Features Save Field: Not Supported 00:22:32.310 Reservations: Not Supported 00:22:32.310 Timestamp: Not Supported 00:22:32.310 Copy: Not Supported 00:22:32.310 Volatile Write Cache: Not Present 00:22:32.310 Atomic Write Unit (Normal): 1 00:22:32.310 Atomic Write Unit (PFail): 1 00:22:32.310 Atomic Compare & Write Unit: 1 00:22:32.310 Fused Compare & Write: Supported 00:22:32.310 Scatter-Gather List 00:22:32.310 SGL Command Set: Supported 00:22:32.310 SGL Keyed: Supported 00:22:32.310 SGL Bit Bucket Descriptor: Not Supported 00:22:32.310 SGL Metadata Pointer: Not Supported 00:22:32.310 Oversized SGL: Not Supported 00:22:32.310 SGL Metadata Address: Not Supported 00:22:32.310 SGL Offset: Supported 00:22:32.310 Transport SGL Data Block: Not Supported 00:22:32.310 Replay Protected Memory Block: Not Supported 00:22:32.310 00:22:32.310 Firmware Slot Information 00:22:32.310 ========================= 00:22:32.310 Active slot: 0 00:22:32.310 00:22:32.310 00:22:32.310 Error Log 00:22:32.310 ========= 00:22:32.310 00:22:32.310 Active Namespaces 00:22:32.310 ================= 00:22:32.310 Discovery Log Page 00:22:32.310 ================== 00:22:32.310 Generation Counter: 2 00:22:32.310 Number of Records: 2 00:22:32.310 Record Format: 0 00:22:32.310 00:22:32.310 Discovery Log Entry 0 00:22:32.310 ---------------------- 00:22:32.310 Transport Type: 3 (TCP) 00:22:32.310 Address Family: 1 (IPv4) 00:22:32.310 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:32.310 Entry Flags: 00:22:32.310 Duplicate Returned Information: 1 00:22:32.310 Explicit Persistent Connection Support for Discovery: 1 00:22:32.310 Transport Requirements: 00:22:32.310 Secure Channel: Not Required 00:22:32.310 Port ID: 0 (0x0000) 00:22:32.310 Controller ID: 65535 (0xffff) 00:22:32.310 Admin Max SQ Size: 128 00:22:32.310 Transport Service Identifier: 4420 00:22:32.310 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:32.310 Transport Address: 10.0.0.2 00:22:32.310 
Discovery Log Entry 1 00:22:32.310 ---------------------- 00:22:32.310 Transport Type: 3 (TCP) 00:22:32.310 Address Family: 1 (IPv4) 00:22:32.310 Subsystem Type: 2 (NVM Subsystem) 00:22:32.310 Entry Flags: 00:22:32.310 Duplicate Returned Information: 0 00:22:32.310 Explicit Persistent Connection Support for Discovery: 0 00:22:32.310 Transport Requirements: 00:22:32.310 Secure Channel: Not Required 00:22:32.310 Port ID: 0 (0x0000) 00:22:32.310 Controller ID: 65535 (0xffff) 00:22:32.310 Admin Max SQ Size: 128 00:22:32.310 Transport Service Identifier: 4420 00:22:32.310 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:32.310 Transport Address: 10.0.0.2 [2024-11-27 08:05:26.164181] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:32.310 [2024-11-27 08:05:26.164192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8100) on tqpair=0x1c86690 00:22:32.310 [2024-11-27 08:05:26.164200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.310 [2024-11-27 08:05:26.164204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8280) on tqpair=0x1c86690 00:22:32.310 [2024-11-27 08:05:26.164209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.310 [2024-11-27 08:05:26.164213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8400) on tqpair=0x1c86690 00:22:32.310 [2024-11-27 08:05:26.164217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.310 [2024-11-27 08:05:26.164221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.310 [2024-11-27 08:05:26.164225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.310 [2024-11-27 08:05:26.164235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.310 [2024-11-27 08:05:26.164239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.164242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.311 [2024-11-27 08:05:26.164249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.311 [2024-11-27 08:05:26.164263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.311 [2024-11-27 08:05:26.167957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.311 [2024-11-27 08:05:26.167965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.311 [2024-11-27 08:05:26.167968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.167972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.311 [2024-11-27 08:05:26.167978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.167982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.167985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.311 [2024-11-27 
08:05:26.167991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.311 [2024-11-27 08:05:26.168007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.311 [2024-11-27 08:05:26.168108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.311 [2024-11-27 08:05:26.168113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.311 [2024-11-27 08:05:26.168116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.311 [2024-11-27 08:05:26.168124] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:32.311 [2024-11-27 08:05:26.168128] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:32.311 [2024-11-27 08:05:26.168137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.311 [2024-11-27 08:05:26.168149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.311 [2024-11-27 08:05:26.168160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.311 [2024-11-27 08:05:26.168222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.311 [2024-11-27 08:05:26.168228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.311 [2024-11-27 08:05:26.168231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.311 [2024-11-27 08:05:26.168243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.311 [2024-11-27 08:05:26.168255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.311 [2024-11-27 08:05:26.168264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.311 [2024-11-27 08:05:26.168327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.311 [2024-11-27 08:05:26.168332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.311 [2024-11-27 08:05:26.168338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.311 [2024-11-27 08:05:26.168351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168357] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.311 [2024-11-27 08:05:26.168363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.311 [2024-11-27 08:05:26.168372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.311 [2024-11-27 08:05:26.168436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.311 [2024-11-27 08:05:26.168442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.311 [2024-11-27 08:05:26.168445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.311 [2024-11-27 08:05:26.168456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.311 [2024-11-27 08:05:26.168468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.311 [2024-11-27 08:05:26.168478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.311 [2024-11-27 08:05:26.168548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.311 [2024-11-27 08:05:26.168554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.311 [2024-11-27 08:05:26.168556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.311 [2024-11-27 08:05:26.168569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.311 [2024-11-27 08:05:26.168581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.311 [2024-11-27 08:05:26.168590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.311 [2024-11-27 08:05:26.168655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.311 [2024-11-27 08:05:26.168660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.311 [2024-11-27 08:05:26.168663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.311 [2024-11-27 08:05:26.168674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.311 [2024-11-27 08:05:26.168687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.311 [2024-11-27 08:05:26.168696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.311 [2024-11-27 08:05:26.168770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.311 [2024-11-27 08:05:26.168775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.311 [2024-11-27 08:05:26.168778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.311 [2024-11-27 08:05:26.168792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.311 [2024-11-27 08:05:26.168804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.311 [2024-11-27 08:05:26.168814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.311 [2024-11-27 08:05:26.168890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.311 [2024-11-27 08:05:26.168895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.311 [2024-11-27 08:05:26.168898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.311 [2024-11-27 08:05:26.168909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.311 [2024-11-27 08:05:26.168916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.311 [2024-11-27 08:05:26.168922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.311 [2024-11-27 08:05:26.168931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.311 [2024-11-27 08:05:26.169007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.311 [2024-11-27 08:05:26.169013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.311 [2024-11-27 08:05:26.169016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.312 [2024-11-27 08:05:26.169027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.312 [2024-11-27 08:05:26.169039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.312 [2024-11-27 08:05:26.169049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.312 
[2024-11-27 08:05:26.169122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.312 [2024-11-27 08:05:26.169128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.312 [2024-11-27 08:05:26.169131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.312 [2024-11-27 08:05:26.169142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.312 [2024-11-27 08:05:26.169154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.312 [2024-11-27 08:05:26.169163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.312 [2024-11-27 08:05:26.169233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.312 [2024-11-27 08:05:26.169238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.312 [2024-11-27 08:05:26.169241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.312 [2024-11-27 08:05:26.169255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.312 [2024-11-27 08:05:26.169267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.312 [2024-11-27 08:05:26.169277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.312 [2024-11-27 08:05:26.169334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.312 [2024-11-27 08:05:26.169340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.312 [2024-11-27 08:05:26.169343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.312 [2024-11-27 08:05:26.169354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.312 [2024-11-27 08:05:26.169366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.312 [2024-11-27 08:05:26.169375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.312 [2024-11-27 08:05:26.169433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.312 [2024-11-27 08:05:26.169439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
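The discovery log page printed above lists two records, the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both reachable over TCP at 10.0.0.2:4420. As an aside (not something this test script runs), the same discovery information can usually be cross-checked from an initiator with nvme-cli, assuming the kernel nvme-tcp transport is available; the commands below are illustrative only:

  # Illustrative cross-check with nvme-cli; relies on the kernel nvme-tcp module, not the SPDK host tools used here
  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.2 -s 4420

The output should mirror Discovery Log Entry 0 and 1 above: one Current Discovery Subsystem record and one NVM Subsystem record for cnode1.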
00:22:32.312 [2024-11-27 08:05:26.169442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.312 [2024-11-27 08:05:26.169453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.312 [2024-11-27 08:05:26.169465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.312 [2024-11-27 08:05:26.169474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.312 [2024-11-27 08:05:26.169533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.312 [2024-11-27 08:05:26.169539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.312 [2024-11-27 08:05:26.169542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.312 [2024-11-27 08:05:26.169553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.312 [2024-11-27 08:05:26.169565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.312 [2024-11-27 08:05:26.169574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.312 [2024-11-27 08:05:26.169634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.312 [2024-11-27 08:05:26.169640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.312 [2024-11-27 08:05:26.169643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.312 [2024-11-27 08:05:26.169654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.312 [2024-11-27 08:05:26.169668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.312 [2024-11-27 08:05:26.169677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.312 [2024-11-27 08:05:26.169734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.312 [2024-11-27 08:05:26.169739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.312 [2024-11-27 08:05:26.169742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.312 [2024-11-27 08:05:26.169754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.312 [2024-11-27 08:05:26.169766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.312 [2024-11-27 08:05:26.169775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.312 [2024-11-27 08:05:26.169851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.312 [2024-11-27 08:05:26.169857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.312 [2024-11-27 08:05:26.169860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.312 [2024-11-27 08:05:26.169871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.312 [2024-11-27 08:05:26.169883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.312 [2024-11-27 08:05:26.169892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.312 [2024-11-27 08:05:26.169953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.312 [2024-11-27 08:05:26.169960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.312 [2024-11-27 08:05:26.169962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.312 [2024-11-27 08:05:26.169974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.312 [2024-11-27 08:05:26.169980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.312 [2024-11-27 08:05:26.169986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.313 [2024-11-27 08:05:26.169996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.313 [2024-11-27 08:05:26.170061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.313 [2024-11-27 08:05:26.170066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.313 [2024-11-27 08:05:26.170069] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.313 [2024-11-27 08:05:26.170081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170084] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.313 [2024-11-27 08:05:26.170095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.313 [2024-11-27 08:05:26.170104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.313 [2024-11-27 08:05:26.170168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.313 [2024-11-27 08:05:26.170174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.313 [2024-11-27 08:05:26.170177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.313 [2024-11-27 08:05:26.170188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.313 [2024-11-27 08:05:26.170200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.313 [2024-11-27 08:05:26.170210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.313 [2024-11-27 08:05:26.170267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.313 [2024-11-27 08:05:26.170272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.313 [2024-11-27 08:05:26.170276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.313 [2024-11-27 08:05:26.170287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.313 [2024-11-27 08:05:26.170299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.313 [2024-11-27 08:05:26.170308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.313 [2024-11-27 08:05:26.170368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.313 [2024-11-27 08:05:26.170373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.313 [2024-11-27 08:05:26.170376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.313 [2024-11-27 08:05:26.170387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.313 
[2024-11-27 08:05:26.170400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.313 [2024-11-27 08:05:26.170409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.313 [2024-11-27 08:05:26.170468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.313 [2024-11-27 08:05:26.170474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.313 [2024-11-27 08:05:26.170477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.313 [2024-11-27 08:05:26.170488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.313 [2024-11-27 08:05:26.170502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.313 [2024-11-27 08:05:26.170511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.313 [2024-11-27 08:05:26.170588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.313 [2024-11-27 08:05:26.170593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.313 [2024-11-27 08:05:26.170596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.313 [2024-11-27 08:05:26.170607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.313 [2024-11-27 08:05:26.170619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.313 [2024-11-27 08:05:26.170629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.313 [2024-11-27 08:05:26.170705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.313 [2024-11-27 08:05:26.170710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.313 [2024-11-27 08:05:26.170713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.313 [2024-11-27 08:05:26.170724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.313 [2024-11-27 08:05:26.170737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.313 [2024-11-27 08:05:26.170746] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.313 [2024-11-27 08:05:26.170833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.313 [2024-11-27 08:05:26.170838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.313 [2024-11-27 08:05:26.170841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.313 [2024-11-27 08:05:26.170853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.313 [2024-11-27 08:05:26.170865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.313 [2024-11-27 08:05:26.170875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.313 [2024-11-27 08:05:26.170934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.313 [2024-11-27 08:05:26.170940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.313 [2024-11-27 08:05:26.170943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.313 [2024-11-27 08:05:26.170959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.170965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.313 [2024-11-27 08:05:26.170971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.313 [2024-11-27 08:05:26.170983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.313 [2024-11-27 08:05:26.171056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.313 [2024-11-27 08:05:26.171062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.313 [2024-11-27 08:05:26.171065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.313 [2024-11-27 08:05:26.171068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.314 [2024-11-27 08:05:26.171076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.171079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.171082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.314 [2024-11-27 08:05:26.171088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.314 [2024-11-27 08:05:26.171098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.314 [2024-11-27 08:05:26.174956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.314 
[2024-11-27 08:05:26.174964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.314 [2024-11-27 08:05:26.174967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.174971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.314 [2024-11-27 08:05:26.174981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.174985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.174988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c86690) 00:22:32.314 [2024-11-27 08:05:26.174994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.314 [2024-11-27 08:05:26.175005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce8580, cid 3, qid 0 00:22:32.314 [2024-11-27 08:05:26.175091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.314 [2024-11-27 08:05:26.175096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.314 [2024-11-27 08:05:26.175099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.175102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce8580) on tqpair=0x1c86690 00:22:32.314 [2024-11-27 08:05:26.175109] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:22:32.314 00:22:32.314 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:32.314 [2024-11-27 08:05:26.211188] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:22:32.314 [2024-11-27 08:05:26.211220] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529592 ] 00:22:32.314 [2024-11-27 08:05:26.249634] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:32.314 [2024-11-27 08:05:26.249679] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:32.314 [2024-11-27 08:05:26.249684] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:32.314 [2024-11-27 08:05:26.249702] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:32.314 [2024-11-27 08:05:26.249710] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:32.314 [2024-11-27 08:05:26.253141] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:32.314 [2024-11-27 08:05:26.253173] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x223a690 0 00:22:32.314 [2024-11-27 08:05:26.253337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:32.314 [2024-11-27 08:05:26.253344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:32.314 [2024-11-27 08:05:26.253348] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:32.314 [2024-11-27 08:05:26.253350] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:32.314 [2024-11-27 08:05:26.253375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.253380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.253384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x223a690) 00:22:32.314 [2024-11-27 08:05:26.253394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:32.314 [2024-11-27 08:05:26.253407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c100, cid 0, qid 0 00:22:32.314 [2024-11-27 08:05:26.260959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.314 [2024-11-27 08:05:26.260968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.314 [2024-11-27 08:05:26.260971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.260975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c100) on tqpair=0x223a690 00:22:32.314 [2024-11-27 08:05:26.260986] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:32.314 [2024-11-27 08:05:26.260992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:32.314 [2024-11-27 08:05:26.260997] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:32.314 [2024-11-27 08:05:26.261010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.261013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.261017] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x223a690) 00:22:32.314 [2024-11-27 08:05:26.261023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.314 [2024-11-27 08:05:26.261036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c100, cid 0, qid 0 00:22:32.314 [2024-11-27 08:05:26.261199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.314 [2024-11-27 08:05:26.261205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.314 [2024-11-27 08:05:26.261208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.261211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c100) on tqpair=0x223a690 00:22:32.314 [2024-11-27 08:05:26.261218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:32.314 [2024-11-27 08:05:26.261226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:32.314 [2024-11-27 08:05:26.261233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.261236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.261239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x223a690) 00:22:32.314 [2024-11-27 08:05:26.261245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.314 [2024-11-27 08:05:26.261258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c100, cid 0, qid 0 00:22:32.314 [2024-11-27 08:05:26.261328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.314 [2024-11-27 08:05:26.261334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.314 [2024-11-27 08:05:26.261337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.261340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c100) on tqpair=0x223a690 00:22:32.314 [2024-11-27 08:05:26.261344] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:32.314 [2024-11-27 08:05:26.261352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:32.314 [2024-11-27 08:05:26.261357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.261361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.261364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x223a690) 00:22:32.314 [2024-11-27 08:05:26.261370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.314 [2024-11-27 08:05:26.261380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c100, cid 0, qid 0 00:22:32.314 [2024-11-27 08:05:26.261446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.314 [2024-11-27 08:05:26.261452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.314 [2024-11-27 
08:05:26.261456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.314 [2024-11-27 08:05:26.261459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c100) on tqpair=0x223a690 00:22:32.314 [2024-11-27 08:05:26.261463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:32.314 [2024-11-27 08:05:26.261472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.261475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.261478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x223a690) 00:22:32.315 [2024-11-27 08:05:26.261484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.315 [2024-11-27 08:05:26.261493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c100, cid 0, qid 0 00:22:32.315 [2024-11-27 08:05:26.261576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.315 [2024-11-27 08:05:26.261581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.315 [2024-11-27 08:05:26.261584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.261587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c100) on tqpair=0x223a690 00:22:32.315 [2024-11-27 08:05:26.261591] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:32.315 [2024-11-27 08:05:26.261595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:32.315 [2024-11-27 08:05:26.261603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:32.315 [2024-11-27 08:05:26.261710] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:32.315 [2024-11-27 08:05:26.261714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:32.315 [2024-11-27 08:05:26.261721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.261724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.261727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x223a690) 00:22:32.315 [2024-11-27 08:05:26.261737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.315 [2024-11-27 08:05:26.261748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c100, cid 0, qid 0 00:22:32.315 [2024-11-27 08:05:26.261826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.315 [2024-11-27 08:05:26.261831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.315 [2024-11-27 08:05:26.261834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.261837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c100) on tqpair=0x223a690 00:22:32.315 
[2024-11-27 08:05:26.261841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:32.315 [2024-11-27 08:05:26.261851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.261854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.261857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x223a690) 00:22:32.315 [2024-11-27 08:05:26.261863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.315 [2024-11-27 08:05:26.261873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c100, cid 0, qid 0 00:22:32.315 [2024-11-27 08:05:26.261956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.315 [2024-11-27 08:05:26.261962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.315 [2024-11-27 08:05:26.261965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.261969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c100) on tqpair=0x223a690 00:22:32.315 [2024-11-27 08:05:26.261973] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:32.315 [2024-11-27 08:05:26.261977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:32.315 [2024-11-27 08:05:26.261983] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:32.315 [2024-11-27 08:05:26.261991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:32.315 [2024-11-27 08:05:26.261998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.262002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x223a690) 00:22:32.315 [2024-11-27 08:05:26.262008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.315 [2024-11-27 08:05:26.262018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c100, cid 0, qid 0 00:22:32.315 [2024-11-27 08:05:26.262123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.315 [2024-11-27 08:05:26.262129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.315 [2024-11-27 08:05:26.262132] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.262135] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x223a690): datao=0, datal=4096, cccid=0 00:22:32.315 [2024-11-27 08:05:26.262139] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229c100) on tqpair(0x223a690): expected_datao=0, payload_size=4096 00:22:32.315 [2024-11-27 08:05:26.262143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.262149] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.262153] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.262164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.315 [2024-11-27 08:05:26.262178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.315 [2024-11-27 08:05:26.262181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.262185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c100) on tqpair=0x223a690 00:22:32.315 [2024-11-27 08:05:26.262191] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:32.315 [2024-11-27 08:05:26.262195] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:32.315 [2024-11-27 08:05:26.262199] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:32.315 [2024-11-27 08:05:26.262203] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:32.315 [2024-11-27 08:05:26.262207] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:32.315 [2024-11-27 08:05:26.262211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:32.315 [2024-11-27 08:05:26.262220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:32.315 [2024-11-27 08:05:26.262226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.315 [2024-11-27 08:05:26.262230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x223a690) 00:22:32.316 [2024-11-27 08:05:26.262239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:32.316 [2024-11-27 08:05:26.262249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c100, cid 0, qid 0 00:22:32.316 [2024-11-27 08:05:26.262313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.316 [2024-11-27 08:05:26.262319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.316 [2024-11-27 08:05:26.262322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c100) on tqpair=0x223a690 00:22:32.316 [2024-11-27 08:05:26.262331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x223a690) 00:22:32.316 [2024-11-27 08:05:26.262342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.316 [2024-11-27 08:05:26.262347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.316 [2024-11-27 
08:05:26.262354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x223a690) 00:22:32.316 [2024-11-27 08:05:26.262359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.316 [2024-11-27 08:05:26.262364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x223a690) 00:22:32.316 [2024-11-27 08:05:26.262375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.316 [2024-11-27 08:05:26.262380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x223a690) 00:22:32.316 [2024-11-27 08:05:26.262393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.316 [2024-11-27 08:05:26.262398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:32.316 [2024-11-27 08:05:26.262408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:32.316 [2024-11-27 08:05:26.262414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x223a690) 00:22:32.316 [2024-11-27 08:05:26.262423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.316 [2024-11-27 08:05:26.262434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c100, cid 0, qid 0 00:22:32.316 [2024-11-27 08:05:26.262439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c280, cid 1, qid 0 00:22:32.316 [2024-11-27 08:05:26.262443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c400, cid 2, qid 0 00:22:32.316 [2024-11-27 08:05:26.262447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c580, cid 3, qid 0 00:22:32.316 [2024-11-27 08:05:26.262451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c700, cid 4, qid 0 00:22:32.316 [2024-11-27 08:05:26.262550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.316 [2024-11-27 08:05:26.262556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.316 [2024-11-27 08:05:26.262559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c700) on tqpair=0x223a690 00:22:32.316 [2024-11-27 08:05:26.262567] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:32.316 [2024-11-27 08:05:26.262571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:32.316 [2024-11-27 08:05:26.262581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:32.316 [2024-11-27 08:05:26.262587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:32.316 [2024-11-27 08:05:26.262593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x223a690) 00:22:32.316 [2024-11-27 08:05:26.262604] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:32.316 [2024-11-27 08:05:26.262614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c700, cid 4, qid 0 00:22:32.316 [2024-11-27 08:05:26.262681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.316 [2024-11-27 08:05:26.262687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.316 [2024-11-27 08:05:26.262690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c700) on tqpair=0x223a690 00:22:32.316 [2024-11-27 08:05:26.262746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:32.316 [2024-11-27 08:05:26.262756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:32.316 [2024-11-27 08:05:26.262763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x223a690) 00:22:32.316 [2024-11-27 08:05:26.262773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.316 [2024-11-27 08:05:26.262784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c700, cid 4, qid 0 00:22:32.316 [2024-11-27 08:05:26.262856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.316 [2024-11-27 08:05:26.262862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.316 [2024-11-27 08:05:26.262866] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262869] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x223a690): datao=0, datal=4096, cccid=4 00:22:32.316 [2024-11-27 08:05:26.262873] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229c700) on tqpair(0x223a690): expected_datao=0, payload_size=4096 00:22:32.316 [2024-11-27 08:05:26.262877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262892] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262895] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.316 [2024-11-27 
08:05:26.262932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.316 [2024-11-27 08:05:26.262938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.316 [2024-11-27 08:05:26.262941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c700) on tqpair=0x223a690 00:22:32.316 [2024-11-27 08:05:26.262960] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:32.316 [2024-11-27 08:05:26.262972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:32.316 [2024-11-27 08:05:26.262981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:32.316 [2024-11-27 08:05:26.262987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.262990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x223a690) 00:22:32.316 [2024-11-27 08:05:26.262996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.316 [2024-11-27 08:05:26.263007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c700, cid 4, qid 0 00:22:32.316 [2024-11-27 08:05:26.263089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.316 [2024-11-27 08:05:26.263095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.316 [2024-11-27 08:05:26.263098] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.316 [2024-11-27 08:05:26.263101] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x223a690): datao=0, datal=4096, cccid=4 00:22:32.316 [2024-11-27 08:05:26.263105] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229c700) on tqpair(0x223a690): expected_datao=0, payload_size=4096 00:22:32.317 [2024-11-27 08:05:26.263108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263119] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263123] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.317 [2024-11-27 08:05:26.263168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.317 [2024-11-27 08:05:26.263171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c700) on tqpair=0x223a690 00:22:32.317 [2024-11-27 08:05:26.263184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:32.317 [2024-11-27 08:05:26.263195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:32.317 [2024-11-27 08:05:26.263201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x223a690) 00:22:32.317 [2024-11-27 08:05:26.263210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.317 [2024-11-27 08:05:26.263221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c700, cid 4, qid 0 00:22:32.317 [2024-11-27 08:05:26.263291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.317 [2024-11-27 08:05:26.263297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.317 [2024-11-27 08:05:26.263300] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263303] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x223a690): datao=0, datal=4096, cccid=4 00:22:32.317 [2024-11-27 08:05:26.263306] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229c700) on tqpair(0x223a690): expected_datao=0, payload_size=4096 00:22:32.317 [2024-11-27 08:05:26.263310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263321] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263324] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.317 [2024-11-27 08:05:26.263372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.317 [2024-11-27 08:05:26.263375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c700) on tqpair=0x223a690 00:22:32.317 [2024-11-27 08:05:26.263388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:32.317 [2024-11-27 08:05:26.263396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:32.317 [2024-11-27 08:05:26.263404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:32.317 [2024-11-27 08:05:26.263410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:32.317 [2024-11-27 08:05:26.263414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:32.317 [2024-11-27 08:05:26.263419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:32.317 [2024-11-27 08:05:26.263423] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:32.317 [2024-11-27 08:05:26.263428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:32.317 [2024-11-27 08:05:26.263432] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:32.317 [2024-11-27 08:05:26.263445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.317 
[2024-11-27 08:05:26.263448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x223a690) 00:22:32.317 [2024-11-27 08:05:26.263454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.317 [2024-11-27 08:05:26.263460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x223a690) 00:22:32.317 [2024-11-27 08:05:26.263473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.317 [2024-11-27 08:05:26.263487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c700, cid 4, qid 0 00:22:32.317 [2024-11-27 08:05:26.263492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c880, cid 5, qid 0 00:22:32.317 [2024-11-27 08:05:26.263575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.317 [2024-11-27 08:05:26.263581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.317 [2024-11-27 08:05:26.263584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c700) on tqpair=0x223a690 00:22:32.317 [2024-11-27 08:05:26.263593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.317 [2024-11-27 08:05:26.263598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.317 [2024-11-27 08:05:26.263601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c880) on tqpair=0x223a690 00:22:32.317 [2024-11-27 08:05:26.263612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x223a690) 00:22:32.317 [2024-11-27 08:05:26.263621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.317 [2024-11-27 08:05:26.263631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c880, cid 5, qid 0 00:22:32.317 [2024-11-27 08:05:26.263701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.317 [2024-11-27 08:05:26.263707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.317 [2024-11-27 08:05:26.263709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c880) on tqpair=0x223a690 00:22:32.317 [2024-11-27 08:05:26.263721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x223a690) 00:22:32.317 [2024-11-27 08:05:26.263730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.317 [2024-11-27 08:05:26.263739] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c880, cid 5, qid 0 00:22:32.317 [2024-11-27 08:05:26.263817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.317 [2024-11-27 08:05:26.263823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.317 [2024-11-27 08:05:26.263826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c880) on tqpair=0x223a690 00:22:32.317 [2024-11-27 08:05:26.263837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x223a690) 00:22:32.317 [2024-11-27 08:05:26.263846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.317 [2024-11-27 08:05:26.263855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c880, cid 5, qid 0 00:22:32.317 [2024-11-27 08:05:26.263924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.317 [2024-11-27 08:05:26.263930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.317 [2024-11-27 08:05:26.263933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c880) on tqpair=0x223a690 00:22:32.317 [2024-11-27 08:05:26.263961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x223a690) 00:22:32.317 [2024-11-27 08:05:26.263971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.317 [2024-11-27 08:05:26.263978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.317 [2024-11-27 08:05:26.263981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x223a690) 00:22:32.317 [2024-11-27 08:05:26.263986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.317 [2024-11-27 08:05:26.263992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.263995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x223a690) 00:22:32.318 [2024-11-27 08:05:26.264001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.318 [2024-11-27 08:05:26.264007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x223a690) 00:22:32.318 [2024-11-27 08:05:26.264016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.318 [2024-11-27 08:05:26.264027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c880, cid 5, qid 0 00:22:32.318 
[2024-11-27 08:05:26.264032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c700, cid 4, qid 0 00:22:32.318 [2024-11-27 08:05:26.264036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229ca00, cid 6, qid 0 00:22:32.318 [2024-11-27 08:05:26.264040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229cb80, cid 7, qid 0 00:22:32.318 [2024-11-27 08:05:26.264188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.318 [2024-11-27 08:05:26.264194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.318 [2024-11-27 08:05:26.264197] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264200] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x223a690): datao=0, datal=8192, cccid=5 00:22:32.318 [2024-11-27 08:05:26.264204] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229c880) on tqpair(0x223a690): expected_datao=0, payload_size=8192 00:22:32.318 [2024-11-27 08:05:26.264208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264214] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264217] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.318 [2024-11-27 08:05:26.264227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.318 [2024-11-27 08:05:26.264229] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264232] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x223a690): datao=0, datal=512, cccid=4 00:22:32.318 [2024-11-27 08:05:26.264236] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229c700) on tqpair(0x223a690): expected_datao=0, payload_size=512 00:22:32.318 [2024-11-27 08:05:26.264240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264246] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264249] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.318 [2024-11-27 08:05:26.264258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.318 [2024-11-27 08:05:26.264263] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264267] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x223a690): datao=0, datal=512, cccid=6 00:22:32.318 [2024-11-27 08:05:26.264271] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229ca00) on tqpair(0x223a690): expected_datao=0, payload_size=512 00:22:32.318 [2024-11-27 08:05:26.264274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264280] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264283] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.318 [2024-11-27 08:05:26.264292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.318 [2024-11-27 08:05:26.264295] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264298] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x223a690): datao=0, datal=4096, cccid=7 00:22:32.318 [2024-11-27 08:05:26.264302] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229cb80) on tqpair(0x223a690): expected_datao=0, payload_size=4096 00:22:32.318 [2024-11-27 08:05:26.264306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264316] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264320] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.318 [2024-11-27 08:05:26.264332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.318 [2024-11-27 08:05:26.264335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c880) on tqpair=0x223a690 00:22:32.318 [2024-11-27 08:05:26.264351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.318 [2024-11-27 08:05:26.264356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.318 [2024-11-27 08:05:26.264359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c700) on tqpair=0x223a690 00:22:32.318 [2024-11-27 08:05:26.264372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.318 [2024-11-27 08:05:26.264378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.318 [2024-11-27 08:05:26.264381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229ca00) on tqpair=0x223a690 00:22:32.318 [2024-11-27 08:05:26.264390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.318 [2024-11-27 08:05:26.264395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.318 [2024-11-27 08:05:26.264398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.318 [2024-11-27 08:05:26.264401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229cb80) on tqpair=0x223a690 00:22:32.318 ===================================================== 00:22:32.318 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.318 ===================================================== 00:22:32.318 Controller Capabilities/Features 00:22:32.318 ================================ 00:22:32.318 Vendor ID: 8086 00:22:32.318 Subsystem Vendor ID: 8086 00:22:32.318 Serial Number: SPDK00000000000001 00:22:32.318 Model Number: SPDK bdev Controller 00:22:32.318 Firmware Version: 25.01 00:22:32.318 Recommended Arb Burst: 6 00:22:32.318 IEEE OUI Identifier: e4 d2 5c 00:22:32.318 Multi-path I/O 00:22:32.318 May have multiple subsystem ports: Yes 00:22:32.318 May have multiple controllers: Yes 00:22:32.318 Associated with SR-IOV VF: No 00:22:32.318 Max Data Transfer Size: 131072 00:22:32.318 Max Number of Namespaces: 32 00:22:32.318 Max Number of I/O Queues: 127 00:22:32.318 NVMe Specification Version (VS): 1.3 00:22:32.318 NVMe Specification Version (Identify): 1.3 
00:22:32.318 Maximum Queue Entries: 128 00:22:32.318 Contiguous Queues Required: Yes 00:22:32.318 Arbitration Mechanisms Supported 00:22:32.318 Weighted Round Robin: Not Supported 00:22:32.318 Vendor Specific: Not Supported 00:22:32.318 Reset Timeout: 15000 ms 00:22:32.318 Doorbell Stride: 4 bytes 00:22:32.318 NVM Subsystem Reset: Not Supported 00:22:32.318 Command Sets Supported 00:22:32.318 NVM Command Set: Supported 00:22:32.318 Boot Partition: Not Supported 00:22:32.318 Memory Page Size Minimum: 4096 bytes 00:22:32.318 Memory Page Size Maximum: 4096 bytes 00:22:32.318 Persistent Memory Region: Not Supported 00:22:32.318 Optional Asynchronous Events Supported 00:22:32.318 Namespace Attribute Notices: Supported 00:22:32.318 Firmware Activation Notices: Not Supported 00:22:32.318 ANA Change Notices: Not Supported 00:22:32.318 PLE Aggregate Log Change Notices: Not Supported 00:22:32.318 LBA Status Info Alert Notices: Not Supported 00:22:32.318 EGE Aggregate Log Change Notices: Not Supported 00:22:32.318 Normal NVM Subsystem Shutdown event: Not Supported 00:22:32.318 Zone Descriptor Change Notices: Not Supported 00:22:32.318 Discovery Log Change Notices: Not Supported 00:22:32.318 Controller Attributes 00:22:32.318 128-bit Host Identifier: Supported 00:22:32.318 Non-Operational Permissive Mode: Not Supported 00:22:32.318 NVM Sets: Not Supported 00:22:32.318 Read Recovery Levels: Not Supported 00:22:32.318 Endurance Groups: Not Supported 00:22:32.318 Predictable Latency Mode: Not Supported 00:22:32.318 Traffic Based Keep ALive: Not Supported 00:22:32.318 Namespace Granularity: Not Supported 00:22:32.319 SQ Associations: Not Supported 00:22:32.319 UUID List: Not Supported 00:22:32.319 Multi-Domain Subsystem: Not Supported 00:22:32.319 Fixed Capacity Management: Not Supported 00:22:32.319 Variable Capacity Management: Not Supported 00:22:32.319 Delete Endurance Group: Not Supported 00:22:32.319 Delete NVM Set: Not Supported 00:22:32.319 Extended LBA Formats Supported: Not Supported 00:22:32.319 Flexible Data Placement Supported: Not Supported 00:22:32.319 00:22:32.319 Controller Memory Buffer Support 00:22:32.319 ================================ 00:22:32.319 Supported: No 00:22:32.319 00:22:32.319 Persistent Memory Region Support 00:22:32.319 ================================ 00:22:32.319 Supported: No 00:22:32.319 00:22:32.319 Admin Command Set Attributes 00:22:32.319 ============================ 00:22:32.319 Security Send/Receive: Not Supported 00:22:32.319 Format NVM: Not Supported 00:22:32.319 Firmware Activate/Download: Not Supported 00:22:32.319 Namespace Management: Not Supported 00:22:32.319 Device Self-Test: Not Supported 00:22:32.319 Directives: Not Supported 00:22:32.319 NVMe-MI: Not Supported 00:22:32.319 Virtualization Management: Not Supported 00:22:32.319 Doorbell Buffer Config: Not Supported 00:22:32.319 Get LBA Status Capability: Not Supported 00:22:32.319 Command & Feature Lockdown Capability: Not Supported 00:22:32.319 Abort Command Limit: 4 00:22:32.319 Async Event Request Limit: 4 00:22:32.319 Number of Firmware Slots: N/A 00:22:32.319 Firmware Slot 1 Read-Only: N/A 00:22:32.319 Firmware Activation Without Reset: N/A 00:22:32.319 Multiple Update Detection Support: N/A 00:22:32.319 Firmware Update Granularity: No Information Provided 00:22:32.319 Per-Namespace SMART Log: No 00:22:32.319 Asymmetric Namespace Access Log Page: Not Supported 00:22:32.319 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:32.319 Command Effects Log Page: Supported 00:22:32.319 Get Log Page Extended 
Data: Supported 00:22:32.319 Telemetry Log Pages: Not Supported 00:22:32.319 Persistent Event Log Pages: Not Supported 00:22:32.319 Supported Log Pages Log Page: May Support 00:22:32.319 Commands Supported & Effects Log Page: Not Supported 00:22:32.319 Feature Identifiers & Effects Log Page:May Support 00:22:32.319 NVMe-MI Commands & Effects Log Page: May Support 00:22:32.319 Data Area 4 for Telemetry Log: Not Supported 00:22:32.319 Error Log Page Entries Supported: 128 00:22:32.319 Keep Alive: Supported 00:22:32.319 Keep Alive Granularity: 10000 ms 00:22:32.319 00:22:32.319 NVM Command Set Attributes 00:22:32.319 ========================== 00:22:32.319 Submission Queue Entry Size 00:22:32.319 Max: 64 00:22:32.319 Min: 64 00:22:32.319 Completion Queue Entry Size 00:22:32.319 Max: 16 00:22:32.319 Min: 16 00:22:32.319 Number of Namespaces: 32 00:22:32.319 Compare Command: Supported 00:22:32.319 Write Uncorrectable Command: Not Supported 00:22:32.319 Dataset Management Command: Supported 00:22:32.319 Write Zeroes Command: Supported 00:22:32.319 Set Features Save Field: Not Supported 00:22:32.319 Reservations: Supported 00:22:32.319 Timestamp: Not Supported 00:22:32.319 Copy: Supported 00:22:32.319 Volatile Write Cache: Present 00:22:32.319 Atomic Write Unit (Normal): 1 00:22:32.319 Atomic Write Unit (PFail): 1 00:22:32.319 Atomic Compare & Write Unit: 1 00:22:32.319 Fused Compare & Write: Supported 00:22:32.319 Scatter-Gather List 00:22:32.319 SGL Command Set: Supported 00:22:32.319 SGL Keyed: Supported 00:22:32.319 SGL Bit Bucket Descriptor: Not Supported 00:22:32.319 SGL Metadata Pointer: Not Supported 00:22:32.319 Oversized SGL: Not Supported 00:22:32.319 SGL Metadata Address: Not Supported 00:22:32.319 SGL Offset: Supported 00:22:32.319 Transport SGL Data Block: Not Supported 00:22:32.319 Replay Protected Memory Block: Not Supported 00:22:32.319 00:22:32.319 Firmware Slot Information 00:22:32.319 ========================= 00:22:32.319 Active slot: 1 00:22:32.319 Slot 1 Firmware Revision: 25.01 00:22:32.319 00:22:32.319 00:22:32.319 Commands Supported and Effects 00:22:32.319 ============================== 00:22:32.319 Admin Commands 00:22:32.319 -------------- 00:22:32.319 Get Log Page (02h): Supported 00:22:32.319 Identify (06h): Supported 00:22:32.319 Abort (08h): Supported 00:22:32.319 Set Features (09h): Supported 00:22:32.319 Get Features (0Ah): Supported 00:22:32.319 Asynchronous Event Request (0Ch): Supported 00:22:32.319 Keep Alive (18h): Supported 00:22:32.319 I/O Commands 00:22:32.319 ------------ 00:22:32.319 Flush (00h): Supported LBA-Change 00:22:32.319 Write (01h): Supported LBA-Change 00:22:32.319 Read (02h): Supported 00:22:32.319 Compare (05h): Supported 00:22:32.319 Write Zeroes (08h): Supported LBA-Change 00:22:32.319 Dataset Management (09h): Supported LBA-Change 00:22:32.319 Copy (19h): Supported LBA-Change 00:22:32.319 00:22:32.319 Error Log 00:22:32.319 ========= 00:22:32.319 00:22:32.319 Arbitration 00:22:32.319 =========== 00:22:32.319 Arbitration Burst: 1 00:22:32.319 00:22:32.319 Power Management 00:22:32.319 ================ 00:22:32.319 Number of Power States: 1 00:22:32.319 Current Power State: Power State #0 00:22:32.319 Power State #0: 00:22:32.319 Max Power: 0.00 W 00:22:32.319 Non-Operational State: Operational 00:22:32.319 Entry Latency: Not Reported 00:22:32.319 Exit Latency: Not Reported 00:22:32.319 Relative Read Throughput: 0 00:22:32.319 Relative Read Latency: 0 00:22:32.319 Relative Write Throughput: 0 00:22:32.319 Relative Write Latency: 0 
00:22:32.319 Idle Power: Not Reported 00:22:32.319 Active Power: Not Reported 00:22:32.319 Non-Operational Permissive Mode: Not Supported 00:22:32.319 00:22:32.319 Health Information 00:22:32.319 ================== 00:22:32.319 Critical Warnings: 00:22:32.319 Available Spare Space: OK 00:22:32.319 Temperature: OK 00:22:32.319 Device Reliability: OK 00:22:32.319 Read Only: No 00:22:32.319 Volatile Memory Backup: OK 00:22:32.319 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:32.319 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:32.319 Available Spare: 0% 00:22:32.319 Available Spare Threshold: 0% 00:22:32.319 Life Percentage Used:[2024-11-27 08:05:26.264486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.319 [2024-11-27 08:05:26.264491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x223a690) 00:22:32.319 [2024-11-27 08:05:26.264497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.319 [2024-11-27 08:05:26.264508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229cb80, cid 7, qid 0 00:22:32.320 [2024-11-27 08:05:26.264586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.320 [2024-11-27 08:05:26.264592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.320 [2024-11-27 08:05:26.264595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.264598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229cb80) on tqpair=0x223a690 00:22:32.320 [2024-11-27 08:05:26.264629] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:32.320 [2024-11-27 08:05:26.264641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c100) on tqpair=0x223a690 00:22:32.320 [2024-11-27 08:05:26.264646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.320 [2024-11-27 08:05:26.264651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c280) on tqpair=0x223a690 00:22:32.320 [2024-11-27 08:05:26.264655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.320 [2024-11-27 08:05:26.264659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c400) on tqpair=0x223a690 00:22:32.320 [2024-11-27 08:05:26.264663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.320 [2024-11-27 08:05:26.264668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c580) on tqpair=0x223a690 00:22:32.320 [2024-11-27 08:05:26.264672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.320 [2024-11-27 08:05:26.264679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.264682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.264686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x223a690) 00:22:32.320 [2024-11-27 08:05:26.264692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:32.320 [2024-11-27 08:05:26.264702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c580, cid 3, qid 0 00:22:32.320 [2024-11-27 08:05:26.264768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.320 [2024-11-27 08:05:26.264773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.320 [2024-11-27 08:05:26.264776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.264780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c580) on tqpair=0x223a690 00:22:32.320 [2024-11-27 08:05:26.264785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.264789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.264792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x223a690) 00:22:32.320 [2024-11-27 08:05:26.264798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.320 [2024-11-27 08:05:26.264810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c580, cid 3, qid 0 00:22:32.320 [2024-11-27 08:05:26.264885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.320 [2024-11-27 08:05:26.264891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.320 [2024-11-27 08:05:26.264894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.264897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c580) on tqpair=0x223a690 00:22:32.320 [2024-11-27 08:05:26.264901] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:32.320 [2024-11-27 08:05:26.264905] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:32.320 [2024-11-27 08:05:26.264913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.264917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.264920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x223a690) 00:22:32.320 [2024-11-27 08:05:26.264926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.320 [2024-11-27 08:05:26.264935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c580, cid 3, qid 0 00:22:32.320 [2024-11-27 08:05:26.268960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.320 [2024-11-27 08:05:26.268969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.320 [2024-11-27 08:05:26.268972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.268975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c580) on tqpair=0x223a690 00:22:32.320 [2024-11-27 08:05:26.268985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.268989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.268992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x223a690) 00:22:32.320 [2024-11-27 08:05:26.268998] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.320 [2024-11-27 08:05:26.269009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229c580, cid 3, qid 0 00:22:32.320 [2024-11-27 08:05:26.269164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.320 [2024-11-27 08:05:26.269169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.320 [2024-11-27 08:05:26.269172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.320 [2024-11-27 08:05:26.269176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229c580) on tqpair=0x223a690 00:22:32.320 [2024-11-27 08:05:26.269182] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:22:32.320 0% 00:22:32.320 Data Units Read: 0 00:22:32.320 Data Units Written: 0 00:22:32.320 Host Read Commands: 0 00:22:32.320 Host Write Commands: 0 00:22:32.320 Controller Busy Time: 0 minutes 00:22:32.320 Power Cycles: 0 00:22:32.320 Power On Hours: 0 hours 00:22:32.320 Unsafe Shutdowns: 0 00:22:32.320 Unrecoverable Media Errors: 0 00:22:32.320 Lifetime Error Log Entries: 0 00:22:32.320 Warning Temperature Time: 0 minutes 00:22:32.320 Critical Temperature Time: 0 minutes 00:22:32.320 00:22:32.320 Number of Queues 00:22:32.320 ================ 00:22:32.320 Number of I/O Submission Queues: 127 00:22:32.320 Number of I/O Completion Queues: 127 00:22:32.320 00:22:32.320 Active Namespaces 00:22:32.320 ================= 00:22:32.320 Namespace ID:1 00:22:32.320 Error Recovery Timeout: Unlimited 00:22:32.320 Command Set Identifier: NVM (00h) 00:22:32.320 Deallocate: Supported 00:22:32.320 Deallocated/Unwritten Error: Not Supported 00:22:32.320 Deallocated Read Value: Unknown 00:22:32.320 Deallocate in Write Zeroes: Not Supported 00:22:32.320 Deallocated Guard Field: 0xFFFF 00:22:32.320 Flush: Supported 00:22:32.320 Reservation: Supported 00:22:32.320 Namespace Sharing Capabilities: Multiple Controllers 00:22:32.320 Size (in LBAs): 131072 (0GiB) 00:22:32.320 Capacity (in LBAs): 131072 (0GiB) 00:22:32.320 Utilization (in LBAs): 131072 (0GiB) 00:22:32.320 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:32.320 EUI64: ABCDEF0123456789 00:22:32.320 UUID: 2121dc0b-9847-43b3-9507-43c32205cd1b 00:22:32.320 Thin Provisioning: Not Supported 00:22:32.320 Per-NS Atomic Units: Yes 00:22:32.320 Atomic Boundary Size (Normal): 0 00:22:32.320 Atomic Boundary Size (PFail): 0 00:22:32.320 Atomic Boundary Offset: 0 00:22:32.320 Maximum Single Source Range Length: 65535 00:22:32.320 Maximum Copy Length: 65535 00:22:32.320 Maximum Source Range Count: 1 00:22:32.320 NGUID/EUI64 Never Reused: No 00:22:32.320 Namespace Write Protected: No 00:22:32.320 Number of LBA Formats: 1 00:22:32.320 Current LBA Format: LBA Format #00 00:22:32.320 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:32.320 00:22:32.320 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:32.320 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.320 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.320 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.320 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.320 
08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.321 rmmod nvme_tcp 00:22:32.321 rmmod nvme_fabrics 00:22:32.321 rmmod nvme_keyring 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2529526 ']' 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2529526 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2529526 ']' 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2529526 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.321 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2529526 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2529526' 00:22:32.581 killing process with pid 2529526 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2529526 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2529526 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.581 08:05:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.116 00:22:35.116 real 0m8.621s 00:22:35.116 user 0m5.004s 00:22:35.116 sys 0m4.387s 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:35.116 ************************************ 00:22:35.116 END TEST nvmf_identify 00:22:35.116 ************************************ 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.116 ************************************ 00:22:35.116 START TEST nvmf_perf 00:22:35.116 ************************************ 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:35.116 * Looking for test storage... 00:22:35.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.116 --rc genhtml_branch_coverage=1 00:22:35.116 --rc genhtml_function_coverage=1 00:22:35.116 --rc genhtml_legend=1 00:22:35.116 --rc geninfo_all_blocks=1 00:22:35.116 --rc geninfo_unexecuted_blocks=1 00:22:35.116 00:22:35.116 ' 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.116 --rc genhtml_branch_coverage=1 00:22:35.116 --rc genhtml_function_coverage=1 00:22:35.116 --rc genhtml_legend=1 00:22:35.116 --rc geninfo_all_blocks=1 00:22:35.116 --rc geninfo_unexecuted_blocks=1 00:22:35.116 00:22:35.116 ' 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.116 --rc genhtml_branch_coverage=1 00:22:35.116 --rc genhtml_function_coverage=1 00:22:35.116 --rc genhtml_legend=1 00:22:35.116 --rc geninfo_all_blocks=1 00:22:35.116 --rc geninfo_unexecuted_blocks=1 00:22:35.116 00:22:35.116 ' 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.116 --rc genhtml_branch_coverage=1 00:22:35.116 --rc genhtml_function_coverage=1 00:22:35.116 --rc genhtml_legend=1 00:22:35.116 --rc geninfo_all_blocks=1 00:22:35.116 --rc geninfo_unexecuted_blocks=1 00:22:35.116 00:22:35.116 ' 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.116 08:05:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.116 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.117 08:05:28 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.117 08:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:40.387 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:40.387 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.387 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:40.388 Found net devices under 0000:86:00.0: cvl_0_0 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.388 08:05:34 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:40.388 Found net devices under 0000:86:00.1: cvl_0_1 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.388 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.388 08:05:34 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:22:40.647 00:22:40.647 --- 10.0.0.2 ping statistics --- 00:22:40.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.647 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:22:40.647 00:22:40.647 --- 10.0.0.1 ping statistics --- 00:22:40.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.647 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2533082 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2533082 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2533082 ']' 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:40.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.647 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:40.647 [2024-11-27 08:05:34.620480] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:22:40.647 [2024-11-27 08:05:34.620526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.647 [2024-11-27 08:05:34.688412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.648 [2024-11-27 08:05:34.731439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.648 [2024-11-27 08:05:34.731478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.648 [2024-11-27 08:05:34.731485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.648 [2024-11-27 08:05:34.731490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.648 [2024-11-27 08:05:34.731496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.648 [2024-11-27 08:05:34.733101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.648 [2024-11-27 08:05:34.733195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.648 [2024-11-27 08:05:34.733261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.648 [2024-11-27 08:05:34.733262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.906 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.906 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:40.906 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.906 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.906 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:40.906 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.906 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:40.906 08:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:44.193 08:05:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:44.193 08:05:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:44.193 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:44.193 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:44.452 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
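At this point the target (nvmf_tgt, pid 2533082, started inside the cvl_0_0_ns_spdk namespace) is up and perf.sh has assembled its bdev list: the local NVMe device at 0000:5e:00.0 loaded through gen_nvme.sh | load_subsystem_config, plus a 64 MiB Malloc bdev with 512-byte blocks. A minimal sketch of the same assembly done by hand over the RPC socket; the explicit bdev_nvme_attach_controller call and the Nvme0 name stand in for what gen_nvme.sh generates, so treat them as illustrative:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # stand-in for what gen_nvme.sh | load_subsystem_config adds for the local disk (illustrative)
  $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
  # 64 MiB RAM-backed bdev with 512-byte blocks; the default name comes back as Malloc0
  $RPC bdev_malloc_create 64 512
  # recover the local traddr the same way perf.sh does above
  $RPC framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr'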
00:22:44.452 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:44.452 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:44.452 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:44.452 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:44.452 [2024-11-27 08:05:38.492776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.452 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.711 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:44.711 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:44.971 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:44.971 08:05:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:45.230 08:05:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.230 [2024-11-27 08:05:39.311841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.488 08:05:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:45.488 08:05:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:45.488 08:05:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:45.488 08:05:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:45.488 08:05:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:46.866 Initializing NVMe Controllers 00:22:46.866 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:46.866 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:46.866 Initialization complete. Launching workers. 
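Everything the TCP runs need on the target side was just wired up through four kinds of RPC: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, attach both bdevs as namespaces, and open data plus discovery listeners on 10.0.0.2:4420; the first spdk_nvme_perf invocation above then baselines the raw PCIe device before any fabric run. Collapsed into one place, the export sequence reduces to the following sketch, assuming the default /var/tmp/spdk.sock RPC socket:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_create_transport -t tcp -o                                   # prints "*** TCP Transport Init ***" on success, as seen above
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001               # -a: allow any host, -s: serial number
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  $RPC nvmf_subsystem_add_ns $NQN Nvme0n1
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420       # "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***"
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420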
00:22:46.866 ======================================================== 00:22:46.866 Latency(us) 00:22:46.866 Device Information : IOPS MiB/s Average min max 00:22:46.866 PCIE (0000:5e:00.0) NSID 1 from core 0: 96996.27 378.89 329.40 39.89 7205.45 00:22:46.866 ======================================================== 00:22:46.866 Total : 96996.27 378.89 329.40 39.89 7205.45 00:22:46.866 00:22:46.866 08:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:48.242 Initializing NVMe Controllers 00:22:48.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:48.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:48.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:48.242 Initialization complete. Launching workers. 00:22:48.242 ======================================================== 00:22:48.242 Latency(us) 00:22:48.242 Device Information : IOPS MiB/s Average min max 00:22:48.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 130.00 0.51 7686.39 124.70 45117.03 00:22:48.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16470.89 6012.19 47897.29 00:22:48.242 ======================================================== 00:22:48.242 Total : 191.00 0.75 10491.91 124.70 47897.29 00:22:48.242 00:22:48.242 08:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:49.617 Initializing NVMe Controllers 00:22:49.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:49.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:49.618 Initialization complete. Launching workers. 00:22:49.618 ======================================================== 00:22:49.618 Latency(us) 00:22:49.618 Device Information : IOPS MiB/s Average min max 00:22:49.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10828.00 42.30 2959.40 366.86 6333.68 00:22:49.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3920.00 15.31 8209.28 6702.31 15719.14 00:22:49.618 ======================================================== 00:22:49.618 Total : 14748.00 57.61 4354.81 366.86 15719.14 00:22:49.618 00:22:49.618 08:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:49.618 08:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:49.618 08:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:52.150 Initializing NVMe Controllers 00:22:52.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.150 Controller IO queue size 128, less than required. 00:22:52.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:52.150 Controller IO queue size 128, less than required. 00:22:52.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:52.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:52.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:52.150 Initialization complete. Launching workers. 00:22:52.150 ======================================================== 00:22:52.150 Latency(us) 00:22:52.150 Device Information : IOPS MiB/s Average min max 00:22:52.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1766.31 441.58 73744.16 55068.71 128379.97 00:22:52.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 611.36 152.84 218187.74 88835.49 309865.96 00:22:52.150 ======================================================== 00:22:52.150 Total : 2377.67 594.42 110884.20 55068.71 309865.96 00:22:52.150 00:22:52.150 08:05:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:52.150 No valid NVMe controllers or AIO or URING devices found 00:22:52.150 Initializing NVMe Controllers 00:22:52.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.150 Controller IO queue size 128, less than required. 00:22:52.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:52.150 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:52.150 Controller IO queue size 128, less than required. 00:22:52.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:52.150 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:52.150 WARNING: Some requested NVMe devices were skipped 00:22:52.150 08:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:54.682 Initializing NVMe Controllers 00:22:54.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:54.682 Controller IO queue size 128, less than required. 00:22:54.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:54.682 Controller IO queue size 128, less than required. 00:22:54.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:54.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:54.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:54.682 Initialization complete. Launching workers. 
00:22:54.682 00:22:54.682 ==================== 00:22:54.682 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:54.682 TCP transport: 00:22:54.682 polls: 15116 00:22:54.682 idle_polls: 11793 00:22:54.682 sock_completions: 3323 00:22:54.682 nvme_completions: 5885 00:22:54.682 submitted_requests: 8740 00:22:54.682 queued_requests: 1 00:22:54.682 00:22:54.682 ==================== 00:22:54.682 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:54.682 TCP transport: 00:22:54.682 polls: 15469 00:22:54.682 idle_polls: 11357 00:22:54.682 sock_completions: 4112 00:22:54.682 nvme_completions: 6681 00:22:54.682 submitted_requests: 10076 00:22:54.682 queued_requests: 1 00:22:54.682 ======================================================== 00:22:54.682 Latency(us) 00:22:54.682 Device Information : IOPS MiB/s Average min max 00:22:54.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1469.84 367.46 88823.60 57800.87 143356.55 00:22:54.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1668.68 417.17 77883.36 48736.79 127294.03 00:22:54.682 ======================================================== 00:22:54.682 Total : 3138.52 784.63 83006.92 48736.79 143356.55 00:22:54.682 00:22:54.941 08:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:54.941 08:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.941 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:54.941 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:54.941 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:54.941 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.941 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:54.941 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.941 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:54.941 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.941 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.941 rmmod nvme_tcp 00:22:54.941 rmmod nvme_fabrics 00:22:55.211 rmmod nvme_keyring 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2533082 ']' 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2533082 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2533082 ']' 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2533082 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2533082 00:22:55.211 08:05:49 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2533082' 00:22:55.211 killing process with pid 2533082 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2533082 00:22:55.211 08:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2533082 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.595 08:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.128 00:22:59.128 real 0m23.983s 00:22:59.128 user 1m3.161s 00:22:59.128 sys 0m8.055s 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:59.128 ************************************ 00:22:59.128 END TEST nvmf_perf 00:22:59.128 ************************************ 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.128 ************************************ 00:22:59.128 START TEST nvmf_fio_host 00:22:59.128 ************************************ 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:59.128 * Looking for test storage... 
00:22:59.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.128 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:59.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.129 --rc genhtml_branch_coverage=1 00:22:59.129 --rc genhtml_function_coverage=1 00:22:59.129 --rc genhtml_legend=1 00:22:59.129 --rc geninfo_all_blocks=1 00:22:59.129 --rc geninfo_unexecuted_blocks=1 00:22:59.129 00:22:59.129 ' 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:59.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.129 --rc genhtml_branch_coverage=1 00:22:59.129 --rc genhtml_function_coverage=1 00:22:59.129 --rc genhtml_legend=1 00:22:59.129 --rc geninfo_all_blocks=1 00:22:59.129 --rc geninfo_unexecuted_blocks=1 00:22:59.129 00:22:59.129 ' 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:59.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.129 --rc genhtml_branch_coverage=1 00:22:59.129 --rc genhtml_function_coverage=1 00:22:59.129 --rc genhtml_legend=1 00:22:59.129 --rc geninfo_all_blocks=1 00:22:59.129 --rc geninfo_unexecuted_blocks=1 00:22:59.129 00:22:59.129 ' 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:59.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.129 --rc genhtml_branch_coverage=1 00:22:59.129 --rc genhtml_function_coverage=1 00:22:59.129 --rc genhtml_legend=1 00:22:59.129 --rc geninfo_all_blocks=1 00:22:59.129 --rc geninfo_unexecuted_blocks=1 00:22:59.129 00:22:59.129 ' 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.129 08:05:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.129 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:59.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:59.130 
08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.130 08:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:05.692 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:05.692 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.692 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:05.692 Found net devices under 0000:86:00.0: cvl_0_0 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:05.693 Found net devices under 0000:86:00.1: cvl_0_1 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:05.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:23:05.693 00:23:05.693 --- 10.0.0.2 ping statistics --- 00:23:05.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.693 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:23:05.693 00:23:05.693 --- 10.0.0.1 ping statistics --- 00:23:05.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.693 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2539273 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2539273 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2539273 ']' 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.693 08:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.693 [2024-11-27 08:05:58.972771] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:23:05.693 [2024-11-27 08:05:58.972825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.693 [2024-11-27 08:05:59.044810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.693 [2024-11-27 08:05:59.089230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.693 [2024-11-27 08:05:59.089270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.693 [2024-11-27 08:05:59.089278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.693 [2024-11-27 08:05:59.089285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.693 [2024-11-27 08:05:59.089291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.693 [2024-11-27 08:05:59.090882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.693 [2024-11-27 08:05:59.090905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.693 [2024-11-27 08:05:59.091003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.693 [2024-11-27 08:05:59.091005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.694 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.694 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:05.694 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:05.694 [2024-11-27 08:05:59.358301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.694 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:05.694 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.694 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.694 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:05.694 Malloc1 00:23:05.694 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:05.952 08:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:05.952 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.211 [2024-11-27 08:06:00.233909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.211 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:06.471 08:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:06.729 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:06.729 fio-3.35 00:23:06.729 Starting 1 thread 00:23:09.264 00:23:09.264 test: (groupid=0, jobs=1): 
err= 0: pid=2539830: Wed Nov 27 08:06:03 2024 00:23:09.264 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.2MiB/2005msec) 00:23:09.264 slat (nsec): min=1585, max=256299, avg=1729.56, stdev=2330.86 00:23:09.264 clat (usec): min=3085, max=10391, avg=6166.12, stdev=454.00 00:23:09.264 lat (usec): min=3120, max=10393, avg=6167.85, stdev=453.92 00:23:09.264 clat percentiles (usec): 00:23:09.264 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:23:09.264 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6259], 00:23:09.264 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:23:09.264 | 99.00th=[ 7177], 99.50th=[ 7242], 99.90th=[ 8848], 99.95th=[ 9765], 00:23:09.264 | 99.99th=[10421] 00:23:09.264 bw ( KiB/s): min=45208, max=46496, per=99.96%, avg=46048.00, stdev=583.76, samples=4 00:23:09.264 iops : min=11302, max=11624, avg=11512.00, stdev=145.94, samples=4 00:23:09.264 write: IOPS=11.4k, BW=44.7MiB/s (46.8MB/s)(89.6MiB/2005msec); 0 zone resets 00:23:09.264 slat (nsec): min=1623, max=228088, avg=1794.89, stdev=1665.71 00:23:09.264 clat (usec): min=2438, max=9618, avg=4950.00, stdev=378.42 00:23:09.264 lat (usec): min=2453, max=9619, avg=4951.80, stdev=378.39 00:23:09.264 clat percentiles (usec): 00:23:09.264 | 1.00th=[ 4080], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:23:09.264 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5014], 00:23:09.264 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5538], 00:23:09.264 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 7701], 99.95th=[ 8979], 00:23:09.264 | 99.99th=[ 9241] 00:23:09.264 bw ( KiB/s): min=45248, max=46336, per=99.98%, avg=45744.00, stdev=462.80, samples=4 00:23:09.264 iops : min=11312, max=11584, avg=11436.00, stdev=115.70, samples=4 00:23:09.264 lat (msec) : 4=0.29%, 10=99.69%, 20=0.02% 00:23:09.264 cpu : usr=71.61%, sys=26.90%, ctx=97, majf=0, minf=2 00:23:09.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:09.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:09.264 issued rwts: total=23090,22933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:09.264 00:23:09.264 Run status group 0 (all jobs): 00:23:09.264 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.2MiB (94.6MB), run=2005-2005msec 00:23:09.264 WRITE: bw=44.7MiB/s (46.8MB/s), 44.7MiB/s-44.7MiB/s (46.8MB/s-46.8MB/s), io=89.6MiB (93.9MB), run=2005-2005msec 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:09.264 08:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:09.523 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:09.523 fio-3.35 00:23:09.523 Starting 1 thread 00:23:10.459 [2024-11-27 08:06:04.231333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4b90 is same with the state(6) to be set 00:23:10.459 [2024-11-27 08:06:04.231425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4b90 is same with the state(6) to be set 00:23:10.459 [2024-11-27 08:06:04.231434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4b90 is same with the state(6) to be set 00:23:11.836 00:23:11.836 test: (groupid=0, jobs=1): err= 0: pid=2540464: Wed Nov 27 08:06:05 2024 00:23:11.836 read: IOPS=10.4k, BW=162MiB/s (170MB/s)(326MiB/2005msec) 00:23:11.836 slat (nsec): min=2546, max=87815, avg=2831.10, stdev=1442.80 00:23:11.836 clat (usec): min=1892, max=50228, avg=7368.45, stdev=4609.10 00:23:11.836 lat (usec): min=1894, max=50231, avg=7371.28, stdev=4609.14 00:23:11.836 clat percentiles (usec): 00:23:11.836 | 1.00th=[ 3818], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5538], 00:23:11.836 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 
6980], 60.00th=[ 7373], 00:23:11.836 | 70.00th=[ 7635], 80.00th=[ 8094], 90.00th=[ 8979], 95.00th=[ 9634], 00:23:11.836 | 99.00th=[43779], 99.50th=[47449], 99.90th=[49546], 99.95th=[50070], 00:23:11.836 | 99.99th=[50070] 00:23:11.836 bw ( KiB/s): min=71488, max=96384, per=49.42%, avg=82152.00, stdev=11525.73, samples=4 00:23:11.836 iops : min= 4468, max= 6024, avg=5134.50, stdev=720.36, samples=4 00:23:11.836 write: IOPS=6362, BW=99.4MiB/s (104MB/s)(168MiB/1689msec); 0 zone resets 00:23:11.836 slat (usec): min=30, max=388, avg=31.81, stdev= 7.79 00:23:11.836 clat (usec): min=3986, max=14923, avg=8846.80, stdev=1558.46 00:23:11.836 lat (usec): min=4017, max=15035, avg=8878.61, stdev=1559.98 00:23:11.836 clat percentiles (usec): 00:23:11.836 | 1.00th=[ 5866], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7504], 00:23:11.836 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:23:11.836 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11076], 95.00th=[11731], 00:23:11.836 | 99.00th=[13173], 99.50th=[13566], 99.90th=[14484], 99.95th=[14615], 00:23:11.836 | 99.99th=[14877] 00:23:11.836 bw ( KiB/s): min=73696, max=100224, per=84.03%, avg=85544.00, stdev=12493.02, samples=4 00:23:11.836 iops : min= 4606, max= 6264, avg=5346.50, stdev=780.81, samples=4 00:23:11.836 lat (msec) : 2=0.01%, 4=1.11%, 10=88.90%, 20=9.18%, 50=0.78% 00:23:11.836 lat (msec) : 100=0.03% 00:23:11.836 cpu : usr=83.53%, sys=15.67%, ctx=47, majf=0, minf=2 00:23:11.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:11.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:11.836 issued rwts: total=20833,10747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:11.836 00:23:11.836 Run status group 0 (all jobs): 00:23:11.836 READ: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=326MiB (341MB), run=2005-2005msec 00:23:11.836 WRITE: bw=99.4MiB/s (104MB/s), 99.4MiB/s-99.4MiB/s (104MB/s-104MB/s), io=168MiB (176MB), run=1689-1689msec 00:23:11.837 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:12.095 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:12.095 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:12.095 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:12.095 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:12.095 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.095 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:12.095 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.095 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:12.095 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.095 08:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.095 rmmod nvme_tcp 00:23:12.095 rmmod nvme_fabrics 00:23:12.095 rmmod nvme_keyring 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.095 08:06:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2539273 ']' 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2539273 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2539273 ']' 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2539273 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2539273 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2539273' 00:23:12.095 killing process with pid 2539273 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2539273 00:23:12.095 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2539273 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.356 08:06:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.262 08:06:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.521 00:23:14.521 real 0m15.596s 00:23:14.521 user 0m44.953s 00:23:14.521 sys 0m6.532s 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.521 ************************************ 00:23:14.521 END TEST nvmf_fio_host 00:23:14.521 ************************************ 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 
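The nvmf_fio_host run that finishes above drives fio through the SPDK NVMe fio plugin rather than a kernel block device: the plugin is LD_PRELOADed into fio and the NVMe/TCP subsystem is selected with a transport-ID style --filename string. A minimal sketch of that invocation and teardown, reconstructed from the trace above (the workspace paths, the mock_sgl_config.fio job file and the 10.0.0.2:4420 listener are specific to this CI host):

    # run fio with the SPDK NVMe plugin preloaded; the --filename string selects the NVMe/TCP controller
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'

    # teardown as traced above: drop the subsystem, then unload the kernel initiator modules
    # (the target process itself is stopped via killprocess in the trace)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics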
00:23:14.521 08:06:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.521 ************************************ 00:23:14.521 START TEST nvmf_failover 00:23:14.521 ************************************ 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:14.521 * Looking for test storage... 00:23:14.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:14.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.521 --rc genhtml_branch_coverage=1 00:23:14.521 --rc genhtml_function_coverage=1 00:23:14.521 --rc genhtml_legend=1 00:23:14.521 --rc geninfo_all_blocks=1 00:23:14.521 --rc geninfo_unexecuted_blocks=1 00:23:14.521 00:23:14.521 ' 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:14.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.521 --rc genhtml_branch_coverage=1 00:23:14.521 --rc genhtml_function_coverage=1 00:23:14.521 --rc genhtml_legend=1 00:23:14.521 --rc geninfo_all_blocks=1 00:23:14.521 --rc geninfo_unexecuted_blocks=1 00:23:14.521 00:23:14.521 ' 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:14.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.521 --rc genhtml_branch_coverage=1 00:23:14.521 --rc genhtml_function_coverage=1 00:23:14.521 --rc genhtml_legend=1 00:23:14.521 --rc geninfo_all_blocks=1 00:23:14.521 --rc geninfo_unexecuted_blocks=1 00:23:14.521 00:23:14.521 ' 00:23:14.521 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:14.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.521 --rc genhtml_branch_coverage=1 00:23:14.521 --rc genhtml_function_coverage=1 00:23:14.521 --rc genhtml_legend=1 00:23:14.521 --rc geninfo_all_blocks=1 00:23:14.521 --rc geninfo_unexecuted_blocks=1 00:23:14.521 00:23:14.521 ' 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.522 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
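Everything the failover test does to the target goes through this rpc_py wrapper. The setup it performs further down in this log (TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem, listeners on three ports) and the bdevperf initiator it runs against it amount to roughly the following sketch; every command is taken from the trace below, the backgrounding with & is editorial, and -z is taken here to mean that bdevperf waits for the perform_tests RPC before starting I/O:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # target side: TCP transport, 64 MiB malloc bdev (512-byte blocks), one subsystem, three listeners
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # initiator side: bdevperf on its own RPC socket, attached to the subsystem with failover enabled
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

During the run the test then removes and re-adds listeners with nvmf_subsystem_remove_listener and nvmf_subsystem_add_listener (visible below on ports 4420, 4421 and 4422) so the verify workload is forced to fail over between paths; the JSON block near the end of the run reports the aggregate IOPS along with the io_failed count accumulated across those switchovers.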
00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.781 08:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.047 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:20.048 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:20.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:20.048 Found net devices under 0000:86:00.0: cvl_0_0 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:20.048 Found net devices under 0000:86:00.1: cvl_0_1 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.048 08:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:23:20.048 00:23:20.048 --- 10.0.0.2 ping statistics --- 00:23:20.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.048 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:23:20.048 00:23:20.048 --- 10.0.0.1 ping statistics --- 00:23:20.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.048 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2544610 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2544610 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2544610 ']' 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.048 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:20.048 [2024-11-27 08:06:14.151070] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:23:20.048 [2024-11-27 08:06:14.151117] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.307 [2024-11-27 08:06:14.218713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:20.307 [2024-11-27 08:06:14.260846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:20.307 [2024-11-27 08:06:14.260885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.307 [2024-11-27 08:06:14.260893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.307 [2024-11-27 08:06:14.260899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.307 [2024-11-27 08:06:14.260904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.307 [2024-11-27 08:06:14.262314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.307 [2024-11-27 08:06:14.262402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.307 [2024-11-27 08:06:14.262404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.307 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.307 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:20.307 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.307 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.307 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.307 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.307 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:20.565 [2024-11-27 08:06:14.573098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.565 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:20.823 Malloc0 00:23:20.823 08:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:21.080 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:21.338 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.338 [2024-11-27 08:06:15.404996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.338 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:21.596 [2024-11-27 08:06:15.597535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:21.596 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:21.855 [2024-11-27 08:06:15.790159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:23:21.855 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:21.855 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2544876 00:23:21.855 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.855 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2544876 /var/tmp/bdevperf.sock 00:23:21.855 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2544876 ']' 00:23:21.855 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.855 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.855 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:21.855 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.855 08:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.114 08:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.114 08:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:22.114 08:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:22.372 NVMe0n1 00:23:22.372 08:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:22.631 00:23:22.631 08:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:22.631 08:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2545096 00:23:22.631 08:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:24.010 08:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.010 [2024-11-27 08:06:17.901198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 
08:06:17.901262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to 
be set 00:23:24.010 [2024-11-27 08:06:17.901402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 [2024-11-27 08:06:17.901432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02d0 is same with the state(6) to be set 00:23:24.010 08:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:27.299 08:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:27.299 00:23:27.299 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:27.558 [2024-11-27 08:06:21.568817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set 00:23:27.558 [2024-11-27 08:06:21.568923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the 
state(6) to be set
00:23:27.558 [2024-11-27 08:06:21.568929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0fa0 is same with the state(6) to be set
00:23:27.558 08:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:30.921 08:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:30.921 [2024-11-27 08:06:24.788469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:30.921 08:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:31.858 08:06:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:32.117 [2024-11-27 08:06:26.012387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1ce0 is same with the state(6) to be set
[... the same recv-state error for tqpair=0xdf1ce0 repeats at 08:06:26.012430 through 08:06:26.012771 ...]
00:23:32.117 [2024-11-27 08:06:26.012777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1ce0 is same with the state(6) to be set
00:23:32.117 08:06:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2545096
00:23:38.696 {
00:23:38.696   "results": [
00:23:38.696     {
00:23:38.696       "job": "NVMe0n1",
00:23:38.696       "core_mask": "0x1",
00:23:38.696       "workload": "verify",
00:23:38.696       "status": "finished",
00:23:38.696       "verify_range": {
00:23:38.696         "start": 0,
00:23:38.696         "length": 16384
00:23:38.696       },
00:23:38.696       "queue_depth": 128,
00:23:38.696       "io_size": 4096,
00:23:38.696       "runtime": 15.004995,
00:23:38.696       "iops": 10671.84627519036,
00:23:38.696       "mibps": 41.68689951246235,
00:23:38.696       "io_failed": 6053,
00:23:38.696       "io_timeout": 0,
00:23:38.696       "avg_latency_us": 11534.583961203822,
00:23:38.696       "min_latency_us": 434.5321739130435,
00:23:38.696       "max_latency_us": 31001.377391304348
00:23:38.696     }
00:23:38.696   ],
00:23:38.696   "core_count": 1
00:23:38.696 }
00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2544876
00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z
2544876 ']' 00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2544876 00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544876 00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544876' 00:23:38.696 killing process with pid 2544876 00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2544876 00:23:38.696 08:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2544876 00:23:38.696 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:38.696 [2024-11-27 08:06:15.851457] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:23:38.696 [2024-11-27 08:06:15.851511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544876 ] 00:23:38.696 [2024-11-27 08:06:15.915725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.696 [2024-11-27 08:06:15.957821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.696 Running I/O for 15 seconds... 
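For reference, the failover step traced above (host/failover.sh@50 through @57) is driven by a pair of rpc.py listener calls against the running target. The sketch below is assembled only from the commands visible in this trace; the rpc.py path, subsystem NQN, address, and ports are copied from the log, while the wrapper itself (variable names, comments) is illustrative and is not the actual host/failover.sh.

# Hypothetical re-run of the listener toggle traced above; not the actual host/failover.sh.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
# Add the TCP listener on 10.0.0.2:4420 (failover.sh@53 in the trace).
"$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
# Pause between the two RPCs, as the trace does (failover.sh@55).
sleep 1
# Remove the listener on 10.0.0.2:4422 (failover.sh@57 in the trace).
"$rpc_py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422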
00:23:38.696 10822.00 IOPS, 42.27 MiB/s [2024-11-27T07:06:32.805Z] [2024-11-27 08:06:17.901980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-27 08:06:17.902016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print_command/print_completion pairs follow for WRITE lba 95944-96632 and READ lba 95792-95904, all ABORTED - SQ DELETION ...]
[2024-11-27 08:06:17.903588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-27 08:06:17.903595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 08:06:17.903616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-27 08:06:17.903624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95920 len:8 PRP1 0x0 PRP2 0x0
[2024-11-27 08:06:17.903632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 08:06:17.903670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-27 08:06:17.903680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 08:06:17.903687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-27 08:06:17.903694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 08:06:17.903701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-27 08:06:17.903708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 08:06:17.903715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-27 08:06:17.903723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 08:06:17.903729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a0370 is same with the state(6) to be set
[2024-11-27 08:06:17.903907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-11-27 08:06:17.903916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-27 08:06:17.903923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95928 len:8 PRP1 0x0 PRP2 0x0
[2024-11-27 08:06:17.903929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting-queued-i/o / manual-completion sequence follows for queued WRITE requests at lba 96640-96808 and lba 95936-96032 (08:06:17.903941 through 08:06:17.916445) ...]
[2024-11-27 08:06:17.916453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96040 len:8 PRP1 0x0 PRP2 0x0
[2024-11-27 08:06:17.916461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 08:06:17.916470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-11-27 08:06:17.916477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.702
[2024-11-27 08:06:17.916485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96048 len:8 PRP1 0x0 PRP2 0x0 00:23:38.702 [2024-11-27 08:06:17.916494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.702 [2024-11-27 08:06:17.916503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.702 [2024-11-27 08:06:17.916510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.702 [2024-11-27 08:06:17.916517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96056 len:8 PRP1 0x0 PRP2 0x0 00:23:38.702 [2024-11-27 08:06:17.916526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.702 [2024-11-27 08:06:17.916535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.702 [2024-11-27 08:06:17.916542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.702 [2024-11-27 08:06:17.916550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0 00:23:38.702 [2024-11-27 08:06:17.916559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.702 [2024-11-27 08:06:17.916571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.702 [2024-11-27 08:06:17.916578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.702 [2024-11-27 08:06:17.916585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0 00:23:38.702 [2024-11-27 08:06:17.916594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.702 [2024-11-27 08:06:17.916603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.702 [2024-11-27 08:06:17.916611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.702 [2024-11-27 08:06:17.916620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0 00:23:38.702 [2024-11-27 08:06:17.916629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.702 [2024-11-27 08:06:17.916638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.702 [2024-11-27 08:06:17.916646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.702 [2024-11-27 08:06:17.916654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0 00:23:38.702 [2024-11-27 08:06:17.916663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.702 [2024-11-27 08:06:17.916673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.702 [2024-11-27 08:06:17.916680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.702 [2024-11-27 08:06:17.916687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96096 len:8 PRP1 0x0 PRP2 0x0 00:23:38.702 [2024-11-27 08:06:17.916696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.916706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.916714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.916721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96104 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.916730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.916738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.916745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.916752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96112 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.916761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.916771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.916778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.916785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96120 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.916794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.916803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.916810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.916818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96128 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.916831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.916840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.916847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.916855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96136 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.916864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.916874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.916881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.916889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:96144 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.916898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.916907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.916913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.916921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96152 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.916930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.916939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.916946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.916959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96160 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.916968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.916977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.916984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.916992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96168 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.917001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.917010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.917017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.917024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96176 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.917033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.917042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.917048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.917056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96184 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.917065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.917074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.917083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.917091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96192 len:8 PRP1 0x0 PRP2 0x0 
00:23:38.703 [2024-11-27 08:06:17.917099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.917110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.917119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.917127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96200 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.917136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.917145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.917152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.917161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96208 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.917170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.917179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.917186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.917193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96216 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.917202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.917211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.917218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.917225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96224 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.917234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.917243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.917250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.917257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96232 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.917266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.917276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.917283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.917290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96240 len:8 PRP1 0x0 PRP2 0x0 00:23:38.703 [2024-11-27 08:06:17.917299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.703 [2024-11-27 08:06:17.917308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.703 [2024-11-27 08:06:17.917315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.703 [2024-11-27 08:06:17.917323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96248 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96256 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96264 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96272 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96280 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95792 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95800 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96288 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96296 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96304 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96312 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96320 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:38.704 [2024-11-27 08:06:17.917717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96328 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96336 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96344 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.917825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.917832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96352 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.917841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.917850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.925854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.925870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96360 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.925881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.925891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.925898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.925906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96368 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.925915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.925924] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.925932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.925940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96376 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.925955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.925965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.925972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.925980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96384 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.925991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.926001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.704 [2024-11-27 08:06:17.926007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.704 [2024-11-27 08:06:17.926015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96392 len:8 PRP1 0x0 PRP2 0x0 00:23:38.704 [2024-11-27 08:06:17.926024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.704 [2024-11-27 08:06:17.926034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96400 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96408 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96416 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96424 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96432 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96440 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96448 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96456 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96464 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 
08:06:17.926348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96472 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96480 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96488 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96496 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96504 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96512 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926573] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.705 [2024-11-27 08:06:17.926582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96520 len:8 PRP1 0x0 PRP2 0x0 00:23:38.705 [2024-11-27 08:06:17.926592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.705 [2024-11-27 08:06:17.926601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.705 [2024-11-27 08:06:17.926608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96528 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.926645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96536 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.926680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96544 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.926716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96552 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.926750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96560 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.926785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96568 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.926820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96576 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.926855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96584 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.926890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96592 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.926925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96600 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.926966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.926975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96608 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.926984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.926993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.927001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 
08:06:17.927009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96616 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.927019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.927029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.927037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.927045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96624 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.927054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.927064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.927072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.927082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96632 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.927091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.927102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.927110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.927118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95808 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.927127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.927136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.927144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.927153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.927162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.706 [2024-11-27 08:06:17.927172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.706 [2024-11-27 08:06:17.927179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.706 [2024-11-27 08:06:17.927188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95824 len:8 PRP1 0x0 PRP2 0x0 00:23:38.706 [2024-11-27 08:06:17.927197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95832 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95840 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95848 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95856 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95864 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95872 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:95880 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95888 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95896 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95904 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95912 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.707 [2024-11-27 08:06:17.927608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.707 [2024-11-27 08:06:17.927615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95920 len:8 PRP1 0x0 PRP2 0x0 00:23:38.707 [2024-11-27 08:06:17.927626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:17.927678] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:38.707 [2024-11-27 08:06:17.927692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:38.707 [2024-11-27 08:06:17.927749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a0370 (9): Bad file descriptor 00:23:38.707 [2024-11-27 08:06:17.931802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:38.707 [2024-11-27 08:06:17.960542] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:38.707 10433.50 IOPS, 40.76 MiB/s [2024-11-27T07:06:32.816Z] 10563.00 IOPS, 41.26 MiB/s [2024-11-27T07:06:32.816Z] 10593.75 IOPS, 41.38 MiB/s [2024-11-27T07:06:32.816Z] [2024-11-27 08:06:21.571272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.707 [2024-11-27 08:06:21.571310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:21.571325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.707 [2024-11-27 08:06:21.571334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:21.571343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.707 [2024-11-27 08:06:21.571351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:21.571360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.707 [2024-11-27 08:06:21.571367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:21.571378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.707 [2024-11-27 08:06:21.571386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:21.571400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.707 [2024-11-27 08:06:21.571407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:21.571415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.707 [2024-11-27 08:06:21.571423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.707 [2024-11-27 08:06:21.571433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.707 [2024-11-27 08:06:21.571440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 
nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571769] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.708 [2024-11-27 08:06:21.571794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.708 [2024-11-27 08:06:21.571801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571923] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.571990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.571998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.709 [2024-11-27 08:06:21.572157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.709 [2024-11-27 08:06:21.572227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.709 [2024-11-27 08:06:21.572233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 
[2024-11-27 08:06:21.572241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.710 [2024-11-27 08:06:21.572248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.710 [2024-11-27 08:06:21.572263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.710 [2024-11-27 08:06:21.572278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.710 [2024-11-27 08:06:21.572292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.710 [2024-11-27 08:06:21.572306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.710 [2024-11-27 08:06:21.572323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24048 len:8 PRP1 0x0 PRP2 0x0 00:23:38.710 [2024-11-27 08:06:21.572356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.710 [2024-11-27 08:06:21.572395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.710 [2024-11-27 08:06:21.572411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.710 [2024-11-27 08:06:21.572426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.710 [2024-11-27 08:06:21.572441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a0370 is same with the state(6) to be set 00:23:38.710 [2024-11-27 08:06:21.572576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.710 [2024-11-27 08:06:21.572584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24056 len:8 PRP1 0x0 PRP2 0x0 00:23:38.710 [2024-11-27 08:06:21.572597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.710 [2024-11-27 08:06:21.572610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:8 PRP1 0x0 PRP2 0x0 00:23:38.710 [2024-11-27 08:06:21.572622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.710 [2024-11-27 08:06:21.572634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24072 len:8 PRP1 0x0 PRP2 0x0 00:23:38.710 [2024-11-27 08:06:21.572647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.710 [2024-11-27 08:06:21.572660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24080 len:8 PRP1 0x0 PRP2 0x0 00:23:38.710 [2024-11-27 08:06:21.572672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.710 [2024-11-27 08:06:21.572684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24088 len:8 PRP1 0x0 PRP2 0x0 00:23:38.710 [2024-11-27 08:06:21.572695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572704] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.710 [2024-11-27 08:06:21.572709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:8 PRP1 0x0 PRP2 0x0 00:23:38.710 [2024-11-27 08:06:21.572721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.710 [2024-11-27 08:06:21.572735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24104 len:8 PRP1 0x0 PRP2 0x0 00:23:38.710 [2024-11-27 08:06:21.572747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.710 [2024-11-27 08:06:21.572760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24112 len:8 PRP1 0x0 PRP2 0x0 00:23:38.710 [2024-11-27 08:06:21.572772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.710 [2024-11-27 08:06:21.572784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24120 len:8 PRP1 0x0 PRP2 0x0 00:23:38.710 [2024-11-27 08:06:21.572797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.710 [2024-11-27 08:06:21.572803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.710 [2024-11-27 08:06:21.572808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.710 [2024-11-27 08:06:21.572814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.572822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.572828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.572833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.572839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24136 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.572845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.572852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:23:38.711 [2024-11-27 08:06:21.572857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.572862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24144 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.572869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.572876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.572882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.572887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24152 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.572894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.572900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.572905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.572911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.572919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.572926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.572932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.572938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24168 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.572944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.572959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.572964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.572970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24176 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.572976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.572983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.572989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.572995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24184 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.573008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.573013] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.573018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.573032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.573037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.573043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24200 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.573057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.573063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.573068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24208 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.573082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.573088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.573093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24216 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.573107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.573112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.573119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.573132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.573137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.573142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24232 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.573155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.573160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.573166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24240 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.573180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.573185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.573190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24248 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.573203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.573208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.573214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.711 [2024-11-27 08:06:21.573228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.711 [2024-11-27 08:06:21.573234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.711 [2024-11-27 08:06:21.573239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24264 len:8 PRP1 0x0 PRP2 0x0 00:23:38.711 [2024-11-27 08:06:21.573246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24272 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24280 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 
[2024-11-27 08:06:21.573315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24296 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24304 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24312 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24328 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24336 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24344 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24360 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24368 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24376 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.712 [2024-11-27 08:06:21.573602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:24384 len:8 PRP1 0x0 PRP2 0x0 00:23:38.712 [2024-11-27 08:06:21.573610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.712 [2024-11-27 08:06:21.573617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.712 [2024-11-27 08:06:21.573621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.573627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24392 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.573633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.573640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24400 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23408 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23416 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23432 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 
[2024-11-27 08:06:21.584209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23440 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23448 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23456 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24408 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23464 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23472 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23480 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23496 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23504 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.713 [2024-11-27 08:06:21.584492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.713 [2024-11-27 08:06:21.584498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23512 len:8 PRP1 0x0 PRP2 0x0 00:23:38.713 [2024-11-27 08:06:21.584505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.713 [2024-11-27 08:06:21.584513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23520 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23528 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23536 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23544 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23560 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:38.714 [2024-11-27 08:06:21.584657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23568 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23576 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23592 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23600 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23608 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584808] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23624 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23632 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.714 [2024-11-27 08:06:21.584886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.714 [2024-11-27 08:06:21.584895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23640 len:8 PRP1 0x0 PRP2 0x0 00:23:38.714 [2024-11-27 08:06:21.584903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.714 [2024-11-27 08:06:21.584910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.584915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.584921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.584928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.584934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.584940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.584945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23656 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.584958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.584965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.584970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.584975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23664 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.584981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.584989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.584994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23672 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23688 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23696 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23704 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 
08:06:21.585120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23720 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23728 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23736 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23752 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585263] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23760 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23768 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23784 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23792 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.715 [2024-11-27 08:06:21.585386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.715 [2024-11-27 08:06:21.585391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23800 len:8 PRP1 0x0 PRP2 0x0 00:23:38.715 [2024-11-27 08:06:21.585398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.715 [2024-11-27 08:06:21.585405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.585410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.585415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.585422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.585428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.585434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.585439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23816 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.585446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.585454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.585459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.585465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23824 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.585474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.585481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.585486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.585492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23832 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.585498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.585505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.585511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.585516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.585523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.585530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.585535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.585540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23848 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.585546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.585553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.585558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 
08:06:21.585563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23856 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.585570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.585576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.585581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.585587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23864 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.585594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.585600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.585605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.585611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.585617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23880 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23888 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23896 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23912 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23920 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23928 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23944 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:23952 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.716 [2024-11-27 08:06:21.593468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23400 len:8 PRP1 0x0 PRP2 0x0 00:23:38.716 [2024-11-27 08:06:21.593478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.716 [2024-11-27 08:06:21.593487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.716 [2024-11-27 08:06:21.593494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23960 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23976 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23984 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23992 len:8 PRP1 0x0 PRP2 0x0 
00:23:38.717 [2024-11-27 08:06:21.593645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24008 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24016 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24024 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24040 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:38.717 [2024-11-27 08:06:21.593864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:38.717 [2024-11-27 08:06:21.593871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24048 len:8 PRP1 0x0 PRP2 0x0 00:23:38.717 [2024-11-27 08:06:21.593880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:21.593930] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:38.717 [2024-11-27 08:06:21.593943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:38.717 [2024-11-27 08:06:21.593991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a0370 (9): Bad file descriptor 00:23:38.717 [2024-11-27 08:06:21.597929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:38.717 [2024-11-27 08:06:21.665627] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:23:38.717 10418.00 IOPS, 40.70 MiB/s [2024-11-27T07:06:32.826Z] 10519.50 IOPS, 41.09 MiB/s [2024-11-27T07:06:32.826Z] 10559.00 IOPS, 41.25 MiB/s [2024-11-27T07:06:32.826Z] 10580.88 IOPS, 41.33 MiB/s [2024-11-27T07:06:32.826Z] 10595.89 IOPS, 41.39 MiB/s [2024-11-27T07:06:32.826Z] [2024-11-27 08:06:26.013197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.717 [2024-11-27 08:06:26.013234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:26.013249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.717 [2024-11-27 08:06:26.013258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:26.013267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.717 [2024-11-27 08:06:26.013274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:26.013283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.717 [2024-11-27 08:06:26.013290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:26.013299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.717 [2024-11-27 08:06:26.013306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:26.013314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.717 [2024-11-27 08:06:26.013321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:26.013329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.717 [2024-11-27 08:06:26.013336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:26.013344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.717 [2024-11-27 08:06:26.013351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.717 [2024-11-27 08:06:26.013359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.717 [2024-11-27 08:06:26.013367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.718 [2024-11-27 08:06:26.013767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.718 [2024-11-27 08:06:26.013775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31456 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:38.718-00:23:38.722 [2024-11-27 08:06:26.013782 - 08:06:26.026833] [repetitive qpair output condensed] nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion print every queued command on qid:1 as it is torn down: READ commands (sqid:1, lba 31464-31552, len:8) and WRITE commands (sqid:1, lba 31560-32192, len:8), first with SGL descriptors and later with PRP1 0x0 PRP2 0x0, each completing with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, interleaved with 579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o and 558:nvme_qpair_manual_complete_request *NOTICE*: Command completed manually.
00:23:38.722 [2024-11-27 08:06:26.026884] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:38.722 [2024-11-27 08:06:26.026914 - 08:06:26.026993] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: four ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3) completed with ABORTED - SQ DELETION (00/08)
00:23:38.722 [2024-11-27 08:06:26.027002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:38.722 [2024-11-27 08:06:26.027042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a0370 (9): Bad file descriptor
00:23:38.722 [2024-11-27 08:06:26.030939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:38.722 [2024-11-27 08:06:26.059289] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
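The burst of aborted commands above is what a forced path removal looks like from the host side: the target deletes the submission queue, every queued READ/WRITE completes as ABORTED - SQ DELETION, and bdev_nvme fails over to the other attached path and resets the controller. As a rough illustration only (not part of failover.sh), the same kind of event can be provoked through the bdevperf RPC socket with calls that appear verbatim later in this trace; $SPDK is shorthand for the spdk checkout used throughout this job:

  # Assumes a target listening on 10.0.0.2:4420 and 10.0.0.2:4422 for
  # nqn.2016-06.io.spdk:cnode1, and a bdevperf instance serving RPCs on
  # /var/tmp/bdevperf.sock with NVMe0 attached to both paths via -x failover.
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # Drop the path the I/O is currently using; its SQs are deleted and the
  # queued commands complete as ABORTED - SQ DELETION (00/08), as logged above.
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # List the remaining paths after bdev_nvme has failed over.
  $RPC bdev_nvme_get_controllers | grep -q NVMe0

The messages to expect are exactly the ones above: "Start failover from ... to ..." followed by "Resetting controller successful."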
00:23:38.722 10573.90 IOPS, 41.30 MiB/s [2024-11-27T07:06:32.831Z] 10610.91 IOPS, 41.45 MiB/s [2024-11-27T07:06:32.831Z] 10625.08 IOPS, 41.50 MiB/s [2024-11-27T07:06:32.831Z] 10650.38 IOPS, 41.60 MiB/s [2024-11-27T07:06:32.831Z] 10665.07 IOPS, 41.66 MiB/s
00:23:38.722 Latency(us)
00:23:38.722 [2024-11-27T07:06:32.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:38.722 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:38.722 Verification LBA range: start 0x0 length 0x4000
00:23:38.722 NVMe0n1 : 15.00 10671.85 41.69 403.40 0.00 11534.58 434.53 31001.38
00:23:38.722 [2024-11-27T07:06:32.831Z] ===================================================================================================================
00:23:38.722 [2024-11-27T07:06:32.831Z] Total : 10671.85 41.69 403.40 0.00 11534.58 434.53 31001.38
00:23:38.722 Received shutdown signal, test time was about 15.000000 seconds
00:23:38.722
00:23:38.722 Latency(us)
00:23:38.722 [2024-11-27T07:06:32.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:38.722 [2024-11-27T07:06:32.831Z] ===================================================================================================================
00:23:38.722 [2024-11-27T07:06:32.831Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2547626
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2547626 /var/tmp/bdevperf.sock
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2547626 ']'
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
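The shell trace at the end of this block is host/failover.sh finishing phase one and preparing phase two: it requires exactly three "Resetting controller successful" events from the 15-second run, then relaunches bdevperf in RPC-server mode so the rest of the test can drive it over /var/tmp/bdevperf.sock. A condensed sketch of what that corresponds to follows; $SPDK and $TESTDIR are abbreviations, the grep target (try.txt) is an assumption based on the file the script cats later, and this is not the verbatim script:

  # Phase 1 must have produced exactly 3 successful failover resets.
  count=$(grep -c 'Resetting controller successful' "$TESTDIR/try.txt")
  (( count != 3 )) && exit 1

  # Relaunch bdevperf as an RPC server; flags copied from the trace above.
  "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # helper from autotest_common.sh

The -z flag keeps bdevperf idle until a perform_tests RPC arrives, which is issued further down via bdevperf.py; the trace that follows first adds listeners on ports 4421 and 4422 and attaches NVMe0 to all three ports with -x failover. As a sanity check on the table above, MiB/s is just IOPS x 4 KiB: 10671.85 x 4096 / 1048576 is approximately 41.69 MiB/s, matching the reported value.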
00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:38.723 [2024-11-27 08:06:32.493461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:38.723 [2024-11-27 08:06:32.681997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:38.723 08:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:39.290 NVMe0n1 00:23:39.290 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:39.548 00:23:39.548 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:39.809 00:23:39.809 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:39.809 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:39.809 08:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:40.069 08:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:43.355 08:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.355 08:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:43.355 08:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2548443 00:23:43.355 08:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:43.355 08:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2548443 00:23:44.734 { 00:23:44.734 "results": [ 00:23:44.734 { 00:23:44.734 "job": "NVMe0n1", 00:23:44.734 "core_mask": "0x1", 
00:23:44.734 "workload": "verify", 00:23:44.734 "status": "finished", 00:23:44.734 "verify_range": { 00:23:44.734 "start": 0, 00:23:44.734 "length": 16384 00:23:44.734 }, 00:23:44.734 "queue_depth": 128, 00:23:44.734 "io_size": 4096, 00:23:44.734 "runtime": 1.007443, 00:23:44.734 "iops": 10829.396799620425, 00:23:44.734 "mibps": 42.302331248517284, 00:23:44.734 "io_failed": 0, 00:23:44.734 "io_timeout": 0, 00:23:44.734 "avg_latency_us": 11773.155832144423, 00:23:44.734 "min_latency_us": 765.7739130434783, 00:23:44.734 "max_latency_us": 10086.845217391305 00:23:44.734 } 00:23:44.734 ], 00:23:44.734 "core_count": 1 00:23:44.734 } 00:23:44.734 08:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:44.734 [2024-11-27 08:06:32.130522] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:23:44.734 [2024-11-27 08:06:32.130572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2547626 ] 00:23:44.734 [2024-11-27 08:06:32.193030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.734 [2024-11-27 08:06:32.231269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.735 [2024-11-27 08:06:34.071680] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:44.735 [2024-11-27 08:06:34.071730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.735 [2024-11-27 08:06:34.071742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.735 [2024-11-27 08:06:34.071752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.735 [2024-11-27 08:06:34.071759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.735 [2024-11-27 08:06:34.071767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.735 [2024-11-27 08:06:34.071774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.735 [2024-11-27 08:06:34.071782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.735 [2024-11-27 08:06:34.071788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.735 [2024-11-27 08:06:34.071795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:44.735 [2024-11-27 08:06:34.071822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:44.735 [2024-11-27 08:06:34.071837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0370 (9): Bad file descriptor 00:23:44.735 [2024-11-27 08:06:34.123115] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:44.735 Running I/O for 1 seconds... 00:23:44.735 10775.00 IOPS, 42.09 MiB/s 00:23:44.735 Latency(us) 00:23:44.735 [2024-11-27T07:06:38.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.735 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:44.735 Verification LBA range: start 0x0 length 0x4000 00:23:44.735 NVMe0n1 : 1.01 10829.40 42.30 0.00 0.00 11773.16 765.77 10086.85 00:23:44.735 [2024-11-27T07:06:38.844Z] =================================================================================================================== 00:23:44.735 [2024-11-27T07:06:38.844Z] Total : 10829.40 42.30 0.00 0.00 11773.16 765.77 10086.85 00:23:44.735 08:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:44.735 08:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:44.735 08:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:44.993 08:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:44.993 08:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:44.993 08:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:45.269 08:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2547626 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2547626 ']' 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2547626 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2547626 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2547626' 00:23:48.555 killing process with pid 2547626 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2547626 00:23:48.555 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2547626 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.814 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.814 rmmod nvme_tcp 00:23:49.073 rmmod nvme_fabrics 00:23:49.073 rmmod nvme_keyring 00:23:49.073 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.073 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:49.073 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:49.073 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2544610 ']' 00:23:49.073 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2544610 00:23:49.073 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2544610 ']' 00:23:49.073 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2544610 00:23:49.073 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:49.073 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.073 08:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544610 00:23:49.073 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:49.073 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:49.073 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544610' 00:23:49.073 killing process with pid 2544610 00:23:49.073 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2544610 00:23:49.073 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2544610 00:23:49.331 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:23:49.331 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:49.331 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:49.331 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:49.331 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:49.331 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:49.331 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:49.332 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:49.332 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:49.332 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.332 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.332 08:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.235 08:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:51.235 00:23:51.235 real 0m36.844s 00:23:51.235 user 1m57.961s 00:23:51.235 sys 0m7.631s 00:23:51.235 08:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.235 08:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:51.235 ************************************ 00:23:51.235 END TEST nvmf_failover 00:23:51.235 ************************************ 00:23:51.235 08:06:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:51.235 08:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:51.235 08:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.235 08:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.495 ************************************ 00:23:51.495 START TEST nvmf_host_discovery 00:23:51.495 ************************************ 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:51.495 * Looking for test storage... 
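For orientation, the teardown traced just above (the tail of the nvmf_failover run, before discovery.sh starts) boils down to the following steps. This is a condensed sketch, not the verbatim nvmftestfini code: the Jenkins workspace path is abbreviated to $SPDK, the pid and flag spellings are taken from the trace, and the exact piping inside the iptr helper is an inference.

  # RPC goes to the long-running nvmf target on the default /var/tmp/spdk.sock, not bdevperf.
  $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f $SPDK/test/nvmf/host/try.txt

  modprobe -v -r nvme-tcp        # the trace shows this also unloading nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  killprocess 2544610            # autotest_common.sh helper; 2544610 is this run's nvmf target pid

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK_NVMF rules; pipe order assumed
  ip -4 addr flush cvl_0_1                               # release the second test-NIC address

With the target gone and the interface cleaned up, the job moves straight on to run_test nvmf_host_discovery, whose setup (sourcing nvmf/common.sh, generating a host NQN, and so on) is what the rest of this trace shows.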
00:23:51.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:51.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.495 --rc genhtml_branch_coverage=1 00:23:51.495 --rc genhtml_function_coverage=1 00:23:51.495 --rc genhtml_legend=1 00:23:51.495 --rc geninfo_all_blocks=1 00:23:51.495 --rc geninfo_unexecuted_blocks=1 00:23:51.495 00:23:51.495 ' 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:51.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.495 --rc genhtml_branch_coverage=1 00:23:51.495 --rc genhtml_function_coverage=1 00:23:51.495 --rc genhtml_legend=1 00:23:51.495 --rc geninfo_all_blocks=1 00:23:51.495 --rc geninfo_unexecuted_blocks=1 00:23:51.495 00:23:51.495 ' 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:51.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.495 --rc genhtml_branch_coverage=1 00:23:51.495 --rc genhtml_function_coverage=1 00:23:51.495 --rc genhtml_legend=1 00:23:51.495 --rc geninfo_all_blocks=1 00:23:51.495 --rc geninfo_unexecuted_blocks=1 00:23:51.495 00:23:51.495 ' 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:51.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.495 --rc genhtml_branch_coverage=1 00:23:51.495 --rc genhtml_function_coverage=1 00:23:51.495 --rc genhtml_legend=1 00:23:51.495 --rc geninfo_all_blocks=1 00:23:51.495 --rc geninfo_unexecuted_blocks=1 00:23:51.495 00:23:51.495 ' 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:51.495 08:06:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.495 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:51.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:51.496 08:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:56.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:56.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:56.770 08:06:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:56.770 Found net devices under 0000:86:00.0: cvl_0_0 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:56.770 Found net devices under 0000:86:00.1: cvl_0_1 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:56.770 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:56.770 
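The trace above resolves each detected E810 PCI function (0000:86:00.0 and 0000:86:00.1) to its kernel net device by globbing that function's net/ directory in sysfs, which is how cvl_0_0 and cvl_0_1 are found. A minimal standalone sketch of that mapping, assuming the same sysfs layout; the loop and variable names are illustrative, not the harness's exact pci_net_devs helper:

# Sketch: map NIC PCI functions to their net device names via sysfs, as common.sh does above.
for pci in 0000:86:00.0 0000:86:00.1; do                      # PCI functions reported in the trace
  for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$netdir" ] || continue                              # skip functions with no net driver bound
    dev=${netdir##*/}                                         # e.g. cvl_0_0, cvl_0_1
    echo "Found net devices under $pci: $dev"
  done
done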
08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:56.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:23:56.771 00:23:56.771 --- 10.0.0.2 ping statistics --- 00:23:56.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.771 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:23:56.771 00:23:56.771 --- 10.0.0.1 ping statistics --- 00:23:56.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.771 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:56.771 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2552776 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2552776 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2552776 ']' 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.030 08:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.030 [2024-11-27 08:06:50.976683] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
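To recap the nvmf_tcp_init sequence just traced: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the default namespace with 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and both directions are verified with ping before the target application is started. The same setup as standalone commands, with device names and addresses taken from the log:

ip netns add cvl_0_0_ns_spdk                                  # namespace that owns the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # firewall exception for NVMe/TCP on the initiator-side interface
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator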
00:23:57.030 [2024-11-27 08:06:50.976730] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.030 [2024-11-27 08:06:51.041435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.030 [2024-11-27 08:06:51.082338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.030 [2024-11-27 08:06:51.082376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.030 [2024-11-27 08:06:51.082383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.030 [2024-11-27 08:06:51.082390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.030 [2024-11-27 08:06:51.082395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.030 [2024-11-27 08:06:51.082959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 [2024-11-27 08:06:51.214759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 [2024-11-27 08:06:51.226935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 null0 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 null1 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2552801 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2552801 /tmp/host.sock 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2552801 ']' 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:57.290 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.290 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 [2024-11-27 08:06:51.301903] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
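By this point two SPDK applications are up: the target (nvmf_tgt -m 0x2) running inside the namespace and configured over JSON-RPC with a TCP transport, a discovery listener on 10.0.0.2:8009, and two null bdevs, plus a second nvmf_tgt (-m 0x1) on /tmp/host.sock that plays the host role. A condensed sketch of those steps; rpc_cmd in the trace is the harness's RPC wrapper, so calling scripts/rpc.py directly as below is an equivalent illustration rather than the literal helper, and the binary paths are abbreviated:

# Target side (RPC over the default socket):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
./scripts/rpc.py bdev_null_create null0 1000 512              # backing bdevs for the namespaces added later
./scripts/rpc.py bdev_null_create null1 1000 512
./scripts/rpc.py bdev_wait_for_examine

# Host side: a second SPDK app on its own RPC socket, used purely as the NVMe-oF initiator.
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &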
00:23:57.290 [2024-11-27 08:06:51.301944] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552801 ] 00:23:57.290 [2024-11-27 08:06:51.365848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.549 [2024-11-27 08:06:51.410138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.549 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.812 [2024-11-27 08:06:51.828461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.812 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.074 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.075 08:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.075 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:58.075 08:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:58.643 [2024-11-27 08:06:52.573101] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:58.643 [2024-11-27 08:06:52.573131] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:58.643 [2024-11-27 08:06:52.573146] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:58.643 [2024-11-27 08:06:52.659400] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:58.902 [2024-11-27 08:06:52.834433] 
bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:58.902 [2024-11-27 08:06:52.835111] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x586e30:1 started. 00:23:58.902 [2024-11-27 08:06:52.836503] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:58.902 [2024-11-27 08:06:52.836519] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:58.902 [2024-11-27 08:06:52.841793] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x586e30 was disconnected and freed. delete nvme_qpair. 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.161 08:06:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:59.161 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
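The checks in this stretch all go through the same waitforcondition helper from autotest_common.sh: it evals the quoted condition up to 10 times, sleeping a second between attempts, and returns success as soon as the condition holds. The retry bound, eval, and sleep are visible in the trace; the exact failure path in the sketch below is an assumption:

# Sketch of the polling helper the trace keeps re-entering (failure return assumed).
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1                                                  # assumed: give up after ~10 attempts
}

# Used above as, for example:
#   waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
#   waitforcondition 'get_notification_count && ((notification_count == expected_count))'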
00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:59.162 [2024-11-27 08:06:53.246961] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5872f0:1 started. 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.162 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:59.162 [2024-11-27 08:06:53.252680] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5872f0 was disconnected and freed. delete nvme_qpair. 
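Putting the discovery flow just traced into one place: the host app starts discovery against 10.0.0.2:8009, the target then publishes nqn.2016-06.io.spdk:cnode0 with a data listener on 4420, the test host NQN, and the null bdevs as namespaces, and the host-side checks confirm the attached controller (nvme0), the resulting bdevs (nvme0n1, then nvme0n2), and the notification counts. A condensed sketch of those RPCs, again using scripts/rpc.py directly in place of the harness's rpc_cmd:

# Host side: enable bdev_nvme logging and start discovery against the 8009 listener.
./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# Target side: publish the subsystem the discovery service will report.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1   # triggers the second bdev and notification

# Host-side checks the waitforcondition calls reduce to:
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # expect nvme0n1 nvme0n2
./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'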
00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.421 [2024-11-27 08:06:53.348580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:59.421 [2024-11-27 08:06:53.348784] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:59.421 [2024-11-27 08:06:53.348806] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.421 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.422 [2024-11-27 08:06:53.436054] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:59.422 08:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:59.681 [2024-11-27 08:06:53.537869] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:59.681 [2024-11-27 08:06:53.537904] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:59.681 [2024-11-27 08:06:53.537912] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:59.681 [2024-11-27 08:06:53.537917] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 
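The get_subsystem_names/get_bdev_list/get_subsystem_paths values being compared throughout this stretch are produced by rpc_cmd + jq pipelines that appear verbatim in the trace; a minimal reconstruction (the pipelines are taken from the trace, the surrounding function shape is an assumption):

    # Controller names known to the host-side bdev_nvme module, e.g. "nvme0".
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    # Bdev names exposed on the host, e.g. "nvme0n1 nvme0n2".
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # TCP service ports (trsvcid) of all paths of one controller, numerically sorted, e.g. "4420 4421".
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }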
00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.619 [2024-11-27 08:06:54.604960] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:00.619 [2024-11-27 08:06:54.604983] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:00.619 [2024-11-27 08:06:54.608930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.619 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:00.619 [2024-11-27 08:06:54.608953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.619 [2024-11-27 08:06:54.608963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.619 [2024-11-27 08:06:54.608970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.619 [2024-11-27 08:06:54.608978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.619 [2024-11-27 08:06:54.608984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.619 [2024-11-27 08:06:54.608992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.619 [2024-11-27 08:06:54.609003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.619 [2024-11-27 08:06:54.609010] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:00.620 [2024-11-27 08:06:54.618942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.620 [2024-11-27 08:06:54.628976] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:00.620 [2024-11-27 08:06:54.628988] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.620 [2024-11-27 08:06:54.628993] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.620 [2024-11-27 08:06:54.628998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.620 [2024-11-27 08:06:54.629016] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:00.620 [2024-11-27 08:06:54.629294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.620 [2024-11-27 08:06:54.629310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.620 [2024-11-27 08:06:54.629318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.620 [2024-11-27 08:06:54.629331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.620 [2024-11-27 08:06:54.629349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.620 [2024-11-27 08:06:54.629357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.620 [2024-11-27 08:06:54.629366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.620 [2024-11-27 08:06:54.629372] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:00.620 [2024-11-27 08:06:54.629377] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:24:00.620 [2024-11-27 08:06:54.629381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.620 [2024-11-27 08:06:54.639047] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:00.620 [2024-11-27 08:06:54.639057] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.620 [2024-11-27 08:06:54.639062] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.620 [2024-11-27 08:06:54.639066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.620 [2024-11-27 08:06:54.639083] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:00.620 [2024-11-27 08:06:54.639217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.620 [2024-11-27 08:06:54.639229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.620 [2024-11-27 08:06:54.639236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.620 [2024-11-27 08:06:54.639247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.620 [2024-11-27 08:06:54.639257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.620 [2024-11-27 08:06:54.639263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.620 [2024-11-27 08:06:54.639270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.620 [2024-11-27 08:06:54.639277] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:00.620 [2024-11-27 08:06:54.639281] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.620 [2024-11-27 08:06:54.639285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.620 [2024-11-27 08:06:54.649115] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:00.620 [2024-11-27 08:06:54.649128] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.620 [2024-11-27 08:06:54.649132] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.620 [2024-11-27 08:06:54.649137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.620 [2024-11-27 08:06:54.649162] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:00.620 [2024-11-27 08:06:54.649393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.620 [2024-11-27 08:06:54.649407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.620 [2024-11-27 08:06:54.649415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.620 [2024-11-27 08:06:54.649426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.620 [2024-11-27 08:06:54.649455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.620 [2024-11-27 08:06:54.649463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.620 [2024-11-27 08:06:54.649470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.620 [2024-11-27 08:06:54.649477] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:00.620 [2024-11-27 08:06:54.649481] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.620 [2024-11-27 08:06:54.649485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:00.620 [2024-11-27 08:06:54.659192] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:00.620 [2024-11-27 08:06:54.659205] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.620 [2024-11-27 08:06:54.659209] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.620 [2024-11-27 08:06:54.659213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.620 [2024-11-27 08:06:54.659227] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:00.620 [2024-11-27 08:06:54.659427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.620 [2024-11-27 08:06:54.659440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.620 [2024-11-27 08:06:54.659448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.620 [2024-11-27 08:06:54.659459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.620 [2024-11-27 08:06:54.659476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.620 [2024-11-27 08:06:54.659484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.620 [2024-11-27 08:06:54.659492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.620 [2024-11-27 08:06:54.659498] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:00.620 [2024-11-27 08:06:54.659502] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.620 [2024-11-27 08:06:54.659506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.620 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.620 [2024-11-27 08:06:54.669259] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:00.620 [2024-11-27 08:06:54.669274] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.620 [2024-11-27 08:06:54.669278] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.620 [2024-11-27 08:06:54.669283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.620 [2024-11-27 08:06:54.669298] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:00.620 [2024-11-27 08:06:54.669404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.620 [2024-11-27 08:06:54.669418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.620 [2024-11-27 08:06:54.669431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.620 [2024-11-27 08:06:54.669442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.621 [2024-11-27 08:06:54.669452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.621 [2024-11-27 08:06:54.669459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.621 [2024-11-27 08:06:54.669466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.621 [2024-11-27 08:06:54.669472] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:00.621 [2024-11-27 08:06:54.669476] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.621 [2024-11-27 08:06:54.669480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.621 [2024-11-27 08:06:54.679330] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:00.621 [2024-11-27 08:06:54.679341] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.621 [2024-11-27 08:06:54.679345] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.621 [2024-11-27 08:06:54.679349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.621 [2024-11-27 08:06:54.679363] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:00.621 [2024-11-27 08:06:54.679586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.621 [2024-11-27 08:06:54.679599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.621 [2024-11-27 08:06:54.679607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.621 [2024-11-27 08:06:54.679618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.621 [2024-11-27 08:06:54.679635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.621 [2024-11-27 08:06:54.679643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.621 [2024-11-27 08:06:54.679650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.621 [2024-11-27 08:06:54.679656] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:00.621 [2024-11-27 08:06:54.679660] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.621 [2024-11-27 08:06:54.679664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.621 [2024-11-27 08:06:54.689395] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:00.621 [2024-11-27 08:06:54.689406] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.621 [2024-11-27 08:06:54.689410] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.621 [2024-11-27 08:06:54.689414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.621 [2024-11-27 08:06:54.689429] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:00.621 [2024-11-27 08:06:54.689532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.621 [2024-11-27 08:06:54.689545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.621 [2024-11-27 08:06:54.689553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.621 [2024-11-27 08:06:54.689563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.621 [2024-11-27 08:06:54.689574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.621 [2024-11-27 08:06:54.689581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.621 [2024-11-27 08:06:54.689588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.621 [2024-11-27 08:06:54.689594] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:00.621 [2024-11-27 08:06:54.689598] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.621 [2024-11-27 08:06:54.689602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.621 [2024-11-27 08:06:54.699461] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:00.621 [2024-11-27 08:06:54.699474] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.621 [2024-11-27 08:06:54.699478] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.621 [2024-11-27 08:06:54.699482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.621 [2024-11-27 08:06:54.699497] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:00.621 [2024-11-27 08:06:54.699782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.621 [2024-11-27 08:06:54.699817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.621 [2024-11-27 08:06:54.699827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.621 [2024-11-27 08:06:54.699840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.621 [2024-11-27 08:06:54.699860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.621 [2024-11-27 08:06:54.699867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.621 [2024-11-27 08:06:54.699875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.621 [2024-11-27 08:06:54.699882] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:00.621 [2024-11-27 08:06:54.699886] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.621 [2024-11-27 08:06:54.699891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:00.621 [2024-11-27 08:06:54.709529] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:24:00.621 [2024-11-27 08:06:54.709542] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.621 [2024-11-27 08:06:54.709547] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.621 [2024-11-27 08:06:54.709553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.621 [2024-11-27 08:06:54.709568] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:00.621 [2024-11-27 08:06:54.709792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.621 [2024-11-27 08:06:54.709805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.621 [2024-11-27 08:06:54.709813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.621 [2024-11-27 08:06:54.709825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.621 [2024-11-27 08:06:54.709835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.621 [2024-11-27 08:06:54.709842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.621 [2024-11-27 08:06:54.709850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.621 [2024-11-27 08:06:54.709857] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:00.621 [2024-11-27 08:06:54.709861] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.621 [2024-11-27 08:06:54.709865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.621 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.621 [2024-11-27 08:06:54.719599] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:00.621 [2024-11-27 08:06:54.719612] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.621 [2024-11-27 08:06:54.719617] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.621 [2024-11-27 08:06:54.719621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.621 [2024-11-27 08:06:54.719635] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:00.621 [2024-11-27 08:06:54.719924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.621 [2024-11-27 08:06:54.719938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.621 [2024-11-27 08:06:54.719946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.622 [2024-11-27 08:06:54.719964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.622 [2024-11-27 08:06:54.719982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.622 [2024-11-27 08:06:54.719991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.622 [2024-11-27 08:06:54.719998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.622 [2024-11-27 08:06:54.720004] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:00.622 [2024-11-27 08:06:54.720009] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.622 [2024-11-27 08:06:54.720013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.880 [2024-11-27 08:06:54.729667] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:00.880 [2024-11-27 08:06:54.729680] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:00.880 [2024-11-27 08:06:54.729685] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:00.880 [2024-11-27 08:06:54.729689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:00.880 [2024-11-27 08:06:54.729705] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:00.880 [2024-11-27 08:06:54.729866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.880 [2024-11-27 08:06:54.729880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x557390 with addr=10.0.0.2, port=4420 00:24:00.880 [2024-11-27 08:06:54.729889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x557390 is same with the state(6) to be set 00:24:00.881 [2024-11-27 08:06:54.729901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x557390 (9): Bad file descriptor 00:24:00.881 [2024-11-27 08:06:54.729911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.881 [2024-11-27 08:06:54.729919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.881 [2024-11-27 08:06:54.729926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:00.881 [2024-11-27 08:06:54.729932] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
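The repeated connect() failures above and below are the expected fallout of the listener change made earlier in this run: errno 111 is ECONNREFUSED, and the host keeps retrying the removed 10.0.0.2:4420 path until the refreshed discovery log page drops it (the "...4420 not found / ...4421 found again" entries that follow). The target-side sequence driving this, using only the rpc_cmd calls already issued in this test, is:

    # add a second listener, then remove the original one (commands as issued earlier in this run)
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # connect() to 10.0.0.2:4420 now fails with errno 111 (ECONNREFUSED) until the host's
    # discovery poller processes the updated log page and only the 4421 path remains.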
00:24:00.881 [2024-11-27 08:06:54.729937] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.881 [2024-11-27 08:06:54.729941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.881 [2024-11-27 08:06:54.732019] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:00.881 [2024-11-27 08:06:54.732035] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:00.881 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:24:00.881 08:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s 
/tmp/host.sock notify_get_notifications -i 2 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:01.817 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.077 08:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.013 [2024-11-27 08:06:57.054027] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:03.013 [2024-11-27 08:06:57.054046] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:03.013 [2024-11-27 08:06:57.054059] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:03.272 [2024-11-27 08:06:57.140309] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:03.532 [2024-11-27 08:06:57.440593] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:03.532 [2024-11-27 08:06:57.441235] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x56efe0:1 started. 
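The two bdev_nvme_start_discovery calls that follow are wrapped in the test's NOT helper and are expected to fail, apparently because a discovery service for 10.0.0.2:8009 already exists; both return JSON-RPC error -17 ("File exists"). Judging from the es=0/es=1 xtrace, NOT roughly inverts the exit status (a sketch; the real helper's handling of signal exits and exclusions is omitted):

    # Succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        # assumption: the real helper also special-cases es > 128 (signal exits)
        (( es != 0 ))
    }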
00:24:03.532 [2024-11-27 08:06:57.442870] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:03.532 [2024-11-27 08:06:57.442895] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:03.532 [2024-11-27 08:06:57.444462] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x56efe0 was disconnected and freed. delete nvme_qpair. 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.532 request: 00:24:03.532 { 00:24:03.532 "name": "nvme", 00:24:03.532 "trtype": "tcp", 00:24:03.532 "traddr": "10.0.0.2", 00:24:03.532 "adrfam": "ipv4", 00:24:03.532 "trsvcid": "8009", 00:24:03.532 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:03.532 "wait_for_attach": true, 00:24:03.532 "method": "bdev_nvme_start_discovery", 00:24:03.532 "req_id": 1 00:24:03.532 } 00:24:03.532 Got JSON-RPC error response 00:24:03.532 response: 00:24:03.532 { 00:24:03.532 "code": -17, 00:24:03.532 "message": "File exists" 00:24:03.532 } 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:03.532 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.533 request: 00:24:03.533 { 00:24:03.533 "name": "nvme_second", 00:24:03.533 "trtype": "tcp", 00:24:03.533 "traddr": "10.0.0.2", 00:24:03.533 "adrfam": "ipv4", 00:24:03.533 "trsvcid": "8009", 00:24:03.533 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:03.533 "wait_for_attach": true, 00:24:03.533 "method": 
"bdev_nvme_start_discovery", 00:24:03.533 "req_id": 1 00:24:03.533 } 00:24:03.533 Got JSON-RPC error response 00:24:03.533 response: 00:24:03.533 { 00:24:03.533 "code": -17, 00:24:03.533 "message": "File exists" 00:24:03.533 } 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:03.533 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.792 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:03.792 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:03.792 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:03.792 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:03.792 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:03.792 08:06:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.792 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:03.792 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.792 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:03.792 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.792 08:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.730 [2024-11-27 08:06:58.678241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.730 [2024-11-27 08:06:58.678270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x558130 with addr=10.0.0.2, port=8010 00:24:04.730 [2024-11-27 08:06:58.678283] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:04.730 [2024-11-27 08:06:58.678290] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:04.730 [2024-11-27 08:06:58.678300] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:05.680 [2024-11-27 08:06:59.680762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.680 [2024-11-27 08:06:59.680788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x558130 with addr=10.0.0.2, port=8010 00:24:05.680 [2024-11-27 08:06:59.680800] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:05.680 [2024-11-27 08:06:59.680806] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:05.680 [2024-11-27 08:06:59.680812] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:06.615 [2024-11-27 08:07:00.682944] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:06.615 request: 00:24:06.615 { 00:24:06.615 "name": "nvme_second", 00:24:06.615 "trtype": "tcp", 00:24:06.615 "traddr": "10.0.0.2", 00:24:06.615 "adrfam": "ipv4", 00:24:06.615 "trsvcid": "8010", 00:24:06.615 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:06.615 "wait_for_attach": false, 00:24:06.615 "attach_timeout_ms": 3000, 00:24:06.615 "method": "bdev_nvme_start_discovery", 00:24:06.615 "req_id": 1 00:24:06.615 } 00:24:06.615 Got JSON-RPC error response 00:24:06.615 response: 00:24:06.615 { 00:24:06.615 "code": -110, 00:24:06.615 "message": "Connection timed out" 00:24:06.615 } 00:24:06.615 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:06.615 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:06.616 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.616 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.616 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.616 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:06.616 08:07:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:06.616 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:06.616 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:06.616 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:06.616 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.616 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.616 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2552801 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:06.875 rmmod nvme_tcp 00:24:06.875 rmmod nvme_fabrics 00:24:06.875 rmmod nvme_keyring 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2552776 ']' 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2552776 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2552776 ']' 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2552776 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552776 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552776' 00:24:06.875 killing process with pid 2552776 00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2552776 
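
The exchanges above exercise the discovery error paths of the host RPC: re-issuing bdev_nvme_start_discovery while a discovery for 10.0.0.2:8009 already exists (under the same name "nvme" or as "nvme_second") returns JSON-RPC error -17 ("File exists"), and pointing "nvme_second" at port 8010, where nothing listens, with a 3000 ms attach timeout returns -110 ("Connection timed out"). A minimal by-hand reproduction of the two failing calls, assuming an SPDK host application listening on /tmp/host.sock with the first discovery already running, might look like:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Second discovery of the same 10.0.0.2:8009 endpoint -> expect -17 "File exists"
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # Discovery of a port nothing listens on, with a 3 s attach timeout
    # -> expect -110 "Connection timed out"
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
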
00:24:06.875 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2552776 00:24:07.135 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:07.135 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:07.135 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:07.135 08:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:07.136 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:07.136 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:07.136 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:07.136 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:07.136 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:07.136 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.136 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.136 08:07:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.043 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:09.043 00:24:09.043 real 0m17.723s 00:24:09.043 user 0m22.539s 00:24:09.043 sys 0m5.392s 00:24:09.043 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.043 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.043 ************************************ 00:24:09.043 END TEST nvmf_host_discovery 00:24:09.043 ************************************ 00:24:09.043 08:07:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:09.043 08:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:09.043 08:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.043 08:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.043 ************************************ 00:24:09.043 START TEST nvmf_host_multipath_status 00:24:09.043 ************************************ 00:24:09.043 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:09.304 * Looking for test storage... 
00:24:09.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:09.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.304 --rc genhtml_branch_coverage=1 00:24:09.304 --rc genhtml_function_coverage=1 00:24:09.304 --rc genhtml_legend=1 00:24:09.304 --rc geninfo_all_blocks=1 00:24:09.304 --rc geninfo_unexecuted_blocks=1 00:24:09.304 00:24:09.304 ' 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:09.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.304 --rc genhtml_branch_coverage=1 00:24:09.304 --rc genhtml_function_coverage=1 00:24:09.304 --rc genhtml_legend=1 00:24:09.304 --rc geninfo_all_blocks=1 00:24:09.304 --rc geninfo_unexecuted_blocks=1 00:24:09.304 00:24:09.304 ' 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:09.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.304 --rc genhtml_branch_coverage=1 00:24:09.304 --rc genhtml_function_coverage=1 00:24:09.304 --rc genhtml_legend=1 00:24:09.304 --rc geninfo_all_blocks=1 00:24:09.304 --rc geninfo_unexecuted_blocks=1 00:24:09.304 00:24:09.304 ' 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:09.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.304 --rc genhtml_branch_coverage=1 00:24:09.304 --rc genhtml_function_coverage=1 00:24:09.304 --rc genhtml_legend=1 00:24:09.304 --rc geninfo_all_blocks=1 00:24:09.304 --rc geninfo_unexecuted_blocks=1 00:24:09.304 00:24:09.304 ' 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
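
The lcov gate traced above (lt 1.15 2 -> cmp_versions 1.15 '<' 2) splits each version string on ".", "-" and ":" and compares the pieces numerically until one side differs. A reduced sketch of the same idea, using a hypothetical helper name rather than the script's own functions, is:

    # Return success if version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
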
00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.304 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:09.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:09.305 08:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.572 08:07:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.572 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:14.573 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:14.573 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:14.573 Found net devices under 0000:86:00.0: cvl_0_0 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:24:14.573 Found net devices under 0000:86:00.1: cvl_0_1 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.573 08:07:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:24:14.573 00:24:14.573 --- 10.0.0.2 ping statistics --- 00:24:14.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.573 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:24:14.573 00:24:14.573 --- 10.0.0.1 ping statistics --- 00:24:14.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.573 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2557966 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2557966 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2557966 ']' 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.573 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.573 08:07:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.574 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.574 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:14.574 [2024-11-27 08:07:08.574974] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:24:14.574 [2024-11-27 08:07:08.575024] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.574 [2024-11-27 08:07:08.642033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:14.832 [2024-11-27 08:07:08.685718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.832 [2024-11-27 08:07:08.685754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.832 [2024-11-27 08:07:08.685763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.832 [2024-11-27 08:07:08.685769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.832 [2024-11-27 08:07:08.685774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.832 [2024-11-27 08:07:08.686992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.833 [2024-11-27 08:07:08.686994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.833 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.833 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:14.833 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.833 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.833 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:14.833 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.833 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2557966 00:24:14.833 08:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:15.091 [2024-11-27 08:07:08.984446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.091 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:15.091 Malloc0 00:24:15.349 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:24:15.349 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:15.607 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:15.866 [2024-11-27 08:07:09.741036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.866 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:15.866 [2024-11-27 08:07:09.933515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:15.867 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2558229 00:24:15.867 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:15.867 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:15.867 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2558229 /var/tmp/bdevperf.sock 00:24:15.867 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2558229 ']' 00:24:15.867 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.867 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.867 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
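
At this point the target side of the multipath test is fully assembled: a TCP transport, a Malloc0 bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), and subsystem nqn.2016-06.io.spdk:cnode1 (the -r flag, going by its rpc.py name, turns on ANA reporting) listening on both 10.0.0.2:4420 and 10.0.0.2:4421. Condensed into one place, and assuming the target's RPC socket is the default /var/tmp/spdk.sock, the setup traced above amounts to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
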
00:24:15.867 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.867 08:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:16.126 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.126 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:16.126 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:16.384 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:16.641 Nvme0n1 00:24:16.898 08:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:17.156 Nvme0n1 00:24:17.156 08:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:17.156 08:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:19.689 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:19.689 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:19.689 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:19.689 08:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:20.681 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:20.681 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:20.681 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.681 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:20.939 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.940 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:20.940 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.940 08:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:20.940 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:20.940 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.940 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.940 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.198 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.198 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.198 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:21.198 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.456 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.457 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:21.457 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.457 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:21.716 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.716 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:21.716 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.716 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:21.716 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.716 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:21.716 08:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:24:21.975 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:22.234 08:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:23.169 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:23.169 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:23.169 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.169 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:23.427 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:23.427 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:23.427 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.427 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:23.685 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.685 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:23.685 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.685 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:23.944 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.944 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:23.944 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.944 08:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:24.203 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.203 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:24.203 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
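Each check_status round above calls port_status once per field (current, connected, accessible) and per listener port. Judging by the commands echoed from host/multipath_status.sh, the helper boils down to a bdev_nvme_get_io_paths RPC against the bdevperf socket plus a jq filter on the listener's trsvcid; a sketch, reusing the $rpc shorthand from the earlier sketch:

    port_status() {
        local port=$1 field=$2 expected=$3 actual
        # field is one of: current, connected, accessible
        actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }
    # e.g. port_status 4421 current false   # the check made at @69 above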
00:24:24.203 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:24.203 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.203 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:24.203 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.203 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:24.462 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.462 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:24.462 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:24.720 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:24.979 08:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:25.911 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:25.911 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:25.911 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.912 08:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:26.169 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.169 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:26.170 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.170 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:26.428 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.428 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:26.428 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.428 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:26.428 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.428 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:26.428 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:26.429 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.686 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.686 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:26.686 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:26.686 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.945 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.945 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:26.945 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:26.945 08:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.204 08:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.204 08:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:27.204 08:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:27.462 08:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:27.462 08:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:28.836 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:28.836 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:28.836 08:07:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.836 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:28.836 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.836 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:28.836 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.836 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:29.094 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.094 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:29.094 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.094 08:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:29.094 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.094 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:29.094 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.094 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:29.352 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.353 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:29.353 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.353 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:29.611 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.611 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:29.611 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:29.611 08:07:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.869 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.869 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:29.869 08:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:30.128 08:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:30.128 08:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:31.503 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:31.503 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:31.503 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.503 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:31.503 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:31.503 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:31.503 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:31.503 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.761 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:31.761 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:31.761 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.762 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:31.762 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.762 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:31.762 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.762 08:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:32.020 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.020 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:32.020 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.020 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:32.289 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.289 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:32.289 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.289 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:32.553 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.553 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:32.553 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:32.553 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:32.811 08:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:33.746 08:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:33.746 08:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:33.746 08:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.746 08:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:34.004 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.004 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:34.004 08:07:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.004 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.262 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.262 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.262 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.262 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:34.520 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.520 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:34.520 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.520 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:34.520 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.520 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:34.520 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.520 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:34.778 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.778 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:34.778 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:34.778 08:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.036 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.036 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:35.299 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:35.300 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:35.565 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:35.823 08:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:36.758 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:36.758 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:36.758 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.758 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.016 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.016 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:37.016 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.016 08:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.016 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.016 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.016 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.016 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:37.274 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.274 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:37.274 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.274 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:37.532 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.532 08:07:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:37.532 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:37.532 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.791 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.791 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:37.791 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.791 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:38.049 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.049 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:38.049 08:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:38.049 08:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:38.308 08:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:39.242 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:39.242 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:39.242 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.242 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:39.500 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:39.500 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:39.500 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.500 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:39.759 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.759 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:39.759 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.759 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:40.017 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.017 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:40.017 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.017 08:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:40.276 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.276 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:40.276 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.276 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:40.276 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.276 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:40.276 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.276 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:40.534 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.534 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:40.534 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:40.792 08:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:41.050 08:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
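The ANA transitions driven above all go through the same pair of RPCs, one per listener; based on the @59/@60 commands echoed from host/multipath_status.sh, set_ANA_state is essentially the following (sketch, same $rpc/$nqn shorthand as above):

    set_ANA_state() {
        # $1 = state for the 4420 listener, $2 = state for the 4421 listener
        # (optimized | non_optimized | inaccessible)
        $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    # e.g. set_ANA_state non_optimized non_optimized   # as at @129 above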
00:24:41.984 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:41.984 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:41.984 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.984 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:42.242 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.242 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:42.242 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.242 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:42.500 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.500 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:42.500 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.500 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:42.758 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.758 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:42.759 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.759 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:42.759 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.759 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:42.759 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.759 08:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:43.017 08:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.017 08:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:43.017 08:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.017 08:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:43.275 08:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.275 08:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:43.275 08:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:43.533 08:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:43.791 08:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:44.726 08:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:44.726 08:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:44.726 08:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.726 08:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.984 08:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.984 08:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:44.984 08:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.984 08:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.984 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.984 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.984 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.984 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:45.243 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:45.243 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:45.243 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.243 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:45.503 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.503 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:45.503 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.503 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.761 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.761 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:45.761 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.761 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2558229 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2558229 ']' 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2558229 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2558229 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2558229' 00:24:46.020 killing process with pid 2558229 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2558229 00:24:46.020 08:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2558229 00:24:46.020 { 00:24:46.020 "results": [ 00:24:46.020 { 00:24:46.020 "job": "Nvme0n1", 
00:24:46.020 "core_mask": "0x4", 00:24:46.020 "workload": "verify", 00:24:46.020 "status": "terminated", 00:24:46.020 "verify_range": { 00:24:46.020 "start": 0, 00:24:46.020 "length": 16384 00:24:46.020 }, 00:24:46.020 "queue_depth": 128, 00:24:46.020 "io_size": 4096, 00:24:46.020 "runtime": 28.620841, 00:24:46.020 "iops": 10245.960277687158, 00:24:46.020 "mibps": 40.02328233471546, 00:24:46.020 "io_failed": 0, 00:24:46.020 "io_timeout": 0, 00:24:46.020 "avg_latency_us": 12469.860449015998, 00:24:46.020 "min_latency_us": 443.43652173913046, 00:24:46.020 "max_latency_us": 3092843.2973913043 00:24:46.020 } 00:24:46.020 ], 00:24:46.020 "core_count": 1 00:24:46.020 } 00:24:46.020 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2558229 00:24:46.301 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:46.301 [2024-11-27 08:07:09.998703] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:24:46.301 [2024-11-27 08:07:09.998759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2558229 ] 00:24:46.301 [2024-11-27 08:07:10.062182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.301 [2024-11-27 08:07:10.108162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.301 Running I/O for 90 seconds... 00:24:46.301 10934.00 IOPS, 42.71 MiB/s [2024-11-27T07:07:40.410Z] 11036.50 IOPS, 43.11 MiB/s [2024-11-27T07:07:40.410Z] 11010.00 IOPS, 43.01 MiB/s [2024-11-27T07:07:40.410Z] 11044.00 IOPS, 43.14 MiB/s [2024-11-27T07:07:40.410Z] 11055.00 IOPS, 43.18 MiB/s [2024-11-27T07:07:40.410Z] 11097.83 IOPS, 43.35 MiB/s [2024-11-27T07:07:40.410Z] 11080.14 IOPS, 43.28 MiB/s [2024-11-27T07:07:40.410Z] 11080.25 IOPS, 43.28 MiB/s [2024-11-27T07:07:40.410Z] 11066.00 IOPS, 43.23 MiB/s [2024-11-27T07:07:40.410Z] 11056.20 IOPS, 43.19 MiB/s [2024-11-27T07:07:40.410Z] 11049.55 IOPS, 43.16 MiB/s [2024-11-27T07:07:40.410Z] 11041.00 IOPS, 43.13 MiB/s [2024-11-27T07:07:40.410Z] [2024-11-27 08:07:23.989803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.989843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.989864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.989874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.989887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.989895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.989908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.301 [2024-11-27 08:07:23.989916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.989928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.989935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.989956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.989964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.989977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.989984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.989997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.301 [2024-11-27 08:07:23.990259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.301 [2024-11-27 08:07:23.990271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:24:46.302 [2024-11-27 08:07:23.990527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.302 [2024-11-27 08:07:23.990735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.990992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.990999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.991012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.991019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.302 [2024-11-27 08:07:23.991496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.302 [2024-11-27 08:07:23.991513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.303 [2024-11-27 08:07:23.991598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.991985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.991998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:24:46.303 [2024-11-27 08:07:23.992198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.303 [2024-11-27 08:07:23.992302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.303 [2024-11-27 08:07:23.992309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.992988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.992996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:46.304 [2024-11-27 08:07:23.993225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.304 [2024-11-27 08:07:23.993324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.993506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.993527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.993549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.993569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 
nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.993588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.993610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.993630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.993650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.993671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.304 [2024-11-27 08:07:23.993683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.304 [2024-11-27 08:07:23.993691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.993971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:24:46.305 [2024-11-27 08:07:23.993991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.993998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.305 [2024-11-27 08:07:23.994345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.305 [2024-11-27 08:07:23.994358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:23.994365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:23.994377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:23.994384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:23.994396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:23.994403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:23.994416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:23.994424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:46.306 [2024-11-27 08:07:24.005672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.306 [2024-11-27 08:07:24.005796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.005816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.005829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.005836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.306 [2024-11-27 08:07:24.006780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.306 [2024-11-27 08:07:24.006793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.006800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.006812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.006820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:24:46.307 [2024-11-27 08:07:24.006833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.006839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.006851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.006859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.006872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.006879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.006891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.006898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.006911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.006920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.006933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.006940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.006961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.006970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.006983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.006993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.307 [2024-11-27 08:07:24.007418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:46.307 [2024-11-27 08:07:24.007438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.307 [2024-11-27 08:07:24.007458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.307 [2024-11-27 08:07:24.007480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.307 [2024-11-27 08:07:24.007503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.307 [2024-11-27 08:07:24.007523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.307 [2024-11-27 08:07:24.007544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.307 [2024-11-27 08:07:24.007565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.307 [2024-11-27 08:07:24.007578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.307 [2024-11-27 08:07:24.007585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.007597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.308 [2024-11-27 08:07:24.007605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.007618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.308 [2024-11-27 08:07:24.007625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.007638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 
nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.308 [2024-11-27 08:07:24.007646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.007658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.308 [2024-11-27 08:07:24.007665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.007678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.308 [2024-11-27 08:07:24.007686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.007698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.308 [2024-11-27 08:07:24.007706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.007719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.308 [2024-11-27 08:07:24.007726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.308 [2024-11-27 08:07:24.008291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:24:46.308 [2024-11-27 08:07:24.008594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.308 [2024-11-27 08:07:24.008932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.308 [2024-11-27 08:07:24.008940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.008959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.008967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.008981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.008988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.309 [2024-11-27 08:07:24.009217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.309 [2024-11-27 08:07:24.009541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.009555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.009564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.010101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.010116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.010131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.010139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.010152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.010161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.010175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.010183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.010197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.010204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.010219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.010226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.010239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.010246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.010259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.010266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.309 [2024-11-27 08:07:24.010280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.309 [2024-11-27 08:07:24.010288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:24:46.310 [2024-11-27 08:07:24.010366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.310 [2024-11-27 08:07:24.010761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.310 [2024-11-27 08:07:24.010770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:46.310-00:24:46.315 [2024-11-27 08:07:24.010784-08:07:24.022862] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ/WRITE commands on sqid:1 (nsid:1, lba 70776-71792, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 [several hundred near-identical notice pairs, differing only in cid/lba/sqhd]
00:24:46.315 [2024-11-27 08:07:24.022862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:46.315 [2024-11-27 08:07:24.022870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.315 [2024-11-27 08:07:24.022884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.315 [2024-11-27 08:07:24.022892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.315 [2024-11-27 08:07:24.022904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.315 [2024-11-27 08:07:24.022912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.315 [2024-11-27 08:07:24.022925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.315 [2024-11-27 08:07:24.022933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.315 [2024-11-27 08:07:24.022952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.315 [2024-11-27 08:07:24.022962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.315 [2024-11-27 08:07:24.022976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.022986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.023008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.023029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.023050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.023070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.023093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.023116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.316 [2024-11-27 08:07:24.023944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.023975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.023988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.023998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:24:46.316 [2024-11-27 08:07:24.024032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.316 [2024-11-27 08:07:24.024327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.316 [2024-11-27 08:07:24.024340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.024662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.024670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.317 [2024-11-27 08:07:24.030038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.317 [2024-11-27 08:07:24.030454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.317 [2024-11-27 08:07:24.030475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.317 [2024-11-27 08:07:24.030496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.317 [2024-11-27 08:07:24.030519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.317 [2024-11-27 08:07:24.030541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.317 [2024-11-27 08:07:24.030554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.030561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.030574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.030582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.030596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.030604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.030617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.030625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.030638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.030646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.030660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.030668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 
dnr:0 00:24:46.318 [2024-11-27 08:07:24.030682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.030690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.030706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.030714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.031293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.031318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.031340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.318 [2024-11-27 08:07:24.031361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.318 [2024-11-27 08:07:24.031887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.318 [2024-11-27 08:07:24.031955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.318 [2024-11-27 08:07:24.031969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.031978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.031990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.031999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.032357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:24:46.319 [2024-11-27 08:07:24.032549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.032642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.032650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.033162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.033176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.033191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.319 [2024-11-27 08:07:24.033201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.033215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.033223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.319 [2024-11-27 08:07:24.033236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.319 [2024-11-27 08:07:24.033244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.320 [2024-11-27 08:07:24.033703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71632 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.033982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.033990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.034002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.034010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.034024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.034032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.034046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.034055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.034068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.034076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.320 [2024-11-27 08:07:24.034088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.320 [2024-11-27 08:07:24.034096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034132] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.034341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 
08:07:24.034355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.034575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.034583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.035140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.035162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.035183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.035203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.321 [2024-11-27 08:07:24.035222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035310] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.321 [2024-11-27 08:07:24.035452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.321 [2024-11-27 08:07:24.035466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035513] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 
nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.035983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.035991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.036011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.036033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.036058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.036083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.036105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.036126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.036147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.036167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.322 [2024-11-27 08:07:24.036189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.322 [2024-11-27 08:07:24.036210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.322 [2024-11-27 08:07:24.036231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.322 [2024-11-27 08:07:24.036251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.322 [2024-11-27 08:07:24.036272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.322 [2024-11-27 08:07:24.036295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.322 [2024-11-27 08:07:24.036309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.323 [2024-11-27 08:07:24.036319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.036332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.323 [2024-11-27 08:07:24.036339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.036353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.323 [2024-11-27 08:07:24.036361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 
dnr:0 00:24:46.323 [2024-11-27 08:07:24.036374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.323 [2024-11-27 08:07:24.036382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.036395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.323 [2024-11-27 08:07:24.036403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.036416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.323 [2024-11-27 08:07:24.036425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.036439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.323 [2024-11-27 08:07:24.036448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.036973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.323 [2024-11-27 08:07:24.036988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.323 [2024-11-27 08:07:24.037012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.323 [2024-11-27 08:07:24.037033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.323 [2024-11-27 08:07:24.037529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.323 [2024-11-27 08:07:24.037564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.323 [2024-11-27 08:07:24.037572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.037584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.037593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.037607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.037616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.037629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.037638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.037650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.037658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.037671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.037680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.037694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.037702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.037714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.037722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.037734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.037742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.037755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.037764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.037777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.037786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.041844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.041855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.041869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.041877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.041889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.041898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.041914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.041922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.041936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.041945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.041964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.041972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.041985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.041994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.042017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.042038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.042059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.042082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.042103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.042124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.042145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.042167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.042189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.324 [2024-11-27 08:07:24.042209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:24:46.324 [2024-11-27 08:07:24.042224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.042232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.042253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.042274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.042295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.042316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.042337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.042359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.042379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.042401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.042424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.042991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.324 [2024-11-27 08:07:24.043009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.324 [2024-11-27 08:07:24.043025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.325 [2024-11-27 08:07:24.043033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.325 [2024-11-27 08:07:24.043058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.325 [2024-11-27 08:07:24.043078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.325 [2024-11-27 08:07:24.043099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.325 [2024-11-27 08:07:24.043125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.325 [2024-11-27 08:07:24.043432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.325 [2024-11-27 08:07:24.043862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.325 [2024-11-27 08:07:24.043878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.043886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.043899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.043908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.043921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.043929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.043943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.043955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.043969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.043978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.043993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.044001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.044021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.044043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.044065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:24:46.326 [2024-11-27 08:07:24.044079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.044087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.044108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.326 [2024-11-27 08:07:24.044942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.044971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.044985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.044993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.045014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.045034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.045056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.045079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.045100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.045121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.045144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.045168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.045189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.326 [2024-11-27 08:07:24.045212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.326 [2024-11-27 08:07:24.045233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.326 [2024-11-27 08:07:24.045247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71552 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045669] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 
08:07:24.045888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.045981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.045991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.046005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.046014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.046027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.046035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.046048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.046057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.046071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.046079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.327 [2024-11-27 08:07:24.046098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.327 [2024-11-27 08:07:24.046105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 
sqhd:000e p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.046978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.046992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-27 08:07:24.047007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047095] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.328 [2024-11-27 08:07:24.047411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.328 [2024-11-27 08:07:24.047419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 
nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:24:46.329 [2024-11-27 08:07:24.047960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.047981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.329 [2024-11-27 08:07:24.047989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.329 [2024-11-27 08:07:24.048767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.329 [2024-11-27 08:07:24.048776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.048788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.330 [2024-11-27 08:07:24.048798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.048812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.330 [2024-11-27 08:07:24.048820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.048833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.048841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.048855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.048864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.048876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.048884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.048899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.048907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.048923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.048931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.048944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.048958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.048974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.048982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.048995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.330 [2024-11-27 08:07:24.049113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.330 [2024-11-27 08:07:24.049634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.330 [2024-11-27 08:07:24.049642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:24:46.331 [2024-11-27 08:07:24.049768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.049971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.049984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.049993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.331 [2024-11-27 08:07:24.050855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.050877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.050898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.050920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.331 [2024-11-27 08:07:24.050942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.050972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.050986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.050994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.051007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.331 [2024-11-27 08:07:24.051014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.331 [2024-11-27 08:07:24.051029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.332 [2024-11-27 08:07:24.051365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.332 [2024-11-27 08:07:24.051378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:46.332 [2024-11-27 08:07:24.051385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:46.332 - 00:24:46.336 [2024-11-27 08:07:24.051398 - 08:07:24.059683] nvme_qpair.c: repeated 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion notice pairs on qid:1: WRITE commands (sqid:1, lba 71024-71792, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1, lba 70776-71016, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
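The "(03/02)" suffix on the completion notices above is SPDK's (SCT/SC) hex pair for the NVMe completion status: status code type 0x3 (path-related) with status code 0x02, which nvme_qpair.c renders as ASYMMETRIC ACCESS INACCESSIBLE; the trailing p/m/dnr flags come from the same status-plus-phase word. As a rough illustration of that layout, here is a minimal Python sketch; decode_nvme_status is a hypothetical helper written for this note, not SPDK code, and the bit positions are assumed from the NVMe base specification rather than taken from this log.

# Hypothetical helper (not SPDK code): decode the 16-bit NVMe completion
# status/phase word that produces notices like
#   "ASYMMETRIC ACCESS INACCESSIBLE (03/02) ... p:0 m:0 dnr:0"
# Assumed layout: bit0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT,
# bit14 = More, bit15 = Do Not Retry.

def decode_nvme_status(status_phase: int) -> dict:
    return {
        "p":   status_phase & 0x1,          # phase tag
        "sc":  (status_phase >> 1) & 0xFF,  # status code
        "sct": (status_phase >> 9) & 0x7,   # status code type
        "m":   (status_phase >> 14) & 0x1,  # more information available
        "dnr": (status_phase >> 15) & 0x1,  # do not retry
    }

if __name__ == "__main__":
    # SCT 0x3 (path related) / SC 0x02 corresponds to the
    # "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" notices above.
    raw = (0x3 << 9) | (0x02 << 1)
    fields = decode_nvme_status(raw)
    print(f"({fields['sct']:02x}/{fields['sc']:02x}) "
          f"p:{fields['p']} m:{fields['m']} dnr:{fields['dnr']}")
    # -> (03/02) p:0 m:0 dnr:0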
10763.00 IOPS, 42.04 MiB/s [2024-11-27T07:07:40.445Z]
9994.21 IOPS, 39.04 MiB/s [2024-11-27T07:07:40.445Z]
9327.93 IOPS, 36.44 MiB/s [2024-11-27T07:07:40.445Z]
8935.50 IOPS, 34.90 MiB/s [2024-11-27T07:07:40.445Z]
9068.35 IOPS, 35.42 MiB/s [2024-11-27T07:07:40.445Z]
9179.44 IOPS, 35.86 MiB/s [2024-11-27T07:07:40.445Z]
9387.58 IOPS, 36.67 MiB/s [2024-11-27T07:07:40.445Z]
9566.15 IOPS, 37.37 MiB/s [2024-11-27T07:07:40.445Z]
9720.10 IOPS, 37.97 MiB/s [2024-11-27T07:07:40.445Z]
9772.82 IOPS, 38.18 MiB/s [2024-11-27T07:07:40.445Z]
9827.57 IOPS, 38.39 MiB/s [2024-11-27T07:07:40.445Z]
9908.54 IOPS, 38.71 MiB/s [2024-11-27T07:07:40.445Z]
10031.28 IOPS, 39.18 MiB/s [2024-11-27T07:07:40.445Z]
10151.50 IOPS, 39.65 MiB/s [2024-11-27T07:07:40.445Z]
[2024-11-27 08:07:37.640452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.336 [2024-11-27 08:07:37.640494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:46.336 [2024-11-27 08:07:37.640528 - 08:07:37.641845] nvme_qpair.c: further 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion notice pairs on qid:1: READ commands (lba 43808-43872, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (lba 43944-43976, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02)
00:24:46.336 - 00:24:46.337 [2024-11-27 08:07:37.641852 - 08:07:37.643386] nvme_qpair.c: repeated 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion notice pairs on qid:1: READ commands (lba 43784-43880, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (lba 43992-44376, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
dnr:0 00:24:46.337 [2024-11-27 08:07:37.643399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.337 [2024-11-27 08:07:37.643406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.337 [2024-11-27 08:07:37.643419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.337 [2024-11-27 08:07:37.643426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.337 [2024-11-27 08:07:37.643439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.337 [2024-11-27 08:07:37.643447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.337 [2024-11-27 08:07:37.643460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.337 [2024-11-27 08:07:37.643466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.337 [2024-11-27 08:07:37.643479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.337 [2024-11-27 08:07:37.643486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.337 [2024-11-27 08:07:37.643499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.337 [2024-11-27 08:07:37.643506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:46.337 [2024-11-27 08:07:37.643519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.337 [2024-11-27 08:07:37.643526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.337 [2024-11-27 08:07:37.643539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.337 [2024-11-27 08:07:37.643546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.337 10211.00 IOPS, 39.89 MiB/s [2024-11-27T07:07:40.446Z] 10239.14 IOPS, 40.00 MiB/s [2024-11-27T07:07:40.446Z] Received shutdown signal, test time was about 28.621525 seconds 00:24:46.337 00:24:46.337 Latency(us) 00:24:46.337 [2024-11-27T07:07:40.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.337 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:46.337 Verification LBA range: start 0x0 length 0x4000 00:24:46.337 Nvme0n1 : 28.62 10245.96 40.02 0.00 0.00 12469.86 443.44 3092843.30 00:24:46.337 [2024-11-27T07:07:40.446Z] 
=================================================================================================================== 00:24:46.337 [2024-11-27T07:07:40.446Z] Total : 10245.96 40.02 0.00 0.00 12469.86 443.44 3092843.30 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.338 rmmod nvme_tcp 00:24:46.338 rmmod nvme_fabrics 00:24:46.338 rmmod nvme_keyring 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2557966 ']' 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2557966 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2557966 ']' 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2557966 00:24:46.338 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2557966 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2557966' 00:24:46.596 killing process with pid 2557966 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2557966 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2557966 00:24:46.596 08:07:40 
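The teardown traced above boils down to a short, fixed sequence: delete the test subsystem over JSON-RPC, remove the scratch file, unload the kernel initiator modules, and stop the target process (the firewall and namespace cleanup follow in the trace that continues below). A condensed sketch of that sequence, using the paths and PID shown in this run rather than the actual common.sh helpers:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # drop the multipath test subsystem
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  sync                                                         # nvmftestfini: flush before unloading initiator modules
  modprobe -v -r nvme-tcp                                      # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill 2557966                                                 # killprocess: stop the nvmf_tgt reactor, then wait for the pid to exit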
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.596 08:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.129 00:24:49.129 real 0m39.551s 00:24:49.129 user 1m48.645s 00:24:49.129 sys 0m10.824s 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:49.129 ************************************ 00:24:49.129 END TEST nvmf_host_multipath_status 00:24:49.129 ************************************ 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.129 ************************************ 00:24:49.129 START TEST nvmf_discovery_remove_ifc 00:24:49.129 ************************************ 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:49.129 * Looking for test storage... 
00:24:49.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:49.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.129 --rc genhtml_branch_coverage=1 00:24:49.129 --rc genhtml_function_coverage=1 00:24:49.129 --rc genhtml_legend=1 00:24:49.129 --rc geninfo_all_blocks=1 00:24:49.129 --rc geninfo_unexecuted_blocks=1 00:24:49.129 00:24:49.129 ' 00:24:49.129 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:49.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.130 --rc genhtml_branch_coverage=1 00:24:49.130 --rc genhtml_function_coverage=1 00:24:49.130 --rc genhtml_legend=1 00:24:49.130 --rc geninfo_all_blocks=1 00:24:49.130 --rc geninfo_unexecuted_blocks=1 00:24:49.130 00:24:49.130 ' 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:49.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.130 --rc genhtml_branch_coverage=1 00:24:49.130 --rc genhtml_function_coverage=1 00:24:49.130 --rc genhtml_legend=1 00:24:49.130 --rc geninfo_all_blocks=1 00:24:49.130 --rc geninfo_unexecuted_blocks=1 00:24:49.130 00:24:49.130 ' 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:49.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.130 --rc genhtml_branch_coverage=1 00:24:49.130 --rc genhtml_function_coverage=1 00:24:49.130 --rc genhtml_legend=1 00:24:49.130 --rc geninfo_all_blocks=1 00:24:49.130 --rc geninfo_unexecuted_blocks=1 00:24:49.130 00:24:49.130 ' 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.130 
08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.130 08:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.395 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.395 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.395 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.395 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.395 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.395 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.395 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.395 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:54.396 08:07:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:54.396 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.396 08:07:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:54.396 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:54.396 Found net devices under 0000:86:00.0: cvl_0_0 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:54.396 Found net devices under 0000:86:00.1: cvl_0_1 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.396 
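What nvmf_tcp_init just traced is the physical-NIC split for this test: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and an iptables rule opens TCP/4420 toward it. A sketch of those steps, reduced from the trace (interface and namespace names as in this run):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port now lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic from the initiator port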
08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:24:54.396 00:24:54.396 --- 10.0.0.2 ping statistics --- 00:24:54.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.396 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:24:54.396 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:54.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:24:54.397 00:24:54.397 --- 10.0.0.1 ping statistics --- 00:24:54.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.397 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2566886 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2566886 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2566886 ']' 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
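With both cross-namespace pings answering, the trace starts the target application inside that namespace and waits for its RPC socket. Reduced to a sketch (binary path, core mask and PID as printed above):

  modprobe nvme-tcp                                                   # kernel NVMe/TCP initiator support on the host side
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                                                          # 2566886 in this run
  # waitforlisten then polls until the app answers on /var/tmp/spdk.sock before the test proceeds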
00:24:54.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.397 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.656 [2024-11-27 08:07:48.536314] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:24:54.656 [2024-11-27 08:07:48.536364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.656 [2024-11-27 08:07:48.604813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.656 [2024-11-27 08:07:48.646721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.656 [2024-11-27 08:07:48.646757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.656 [2024-11-27 08:07:48.646765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.656 [2024-11-27 08:07:48.646771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.656 [2024-11-27 08:07:48.646776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.656 [2024-11-27 08:07:48.647305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.656 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.656 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:54.656 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.656 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.656 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.915 [2024-11-27 08:07:48.791779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.915 [2024-11-27 08:07:48.799956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:54.915 null0 00:24:54.915 [2024-11-27 08:07:48.831942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2566917 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2566917 /tmp/host.sock 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2566917 ']' 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:54.915 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.915 08:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.915 [2024-11-27 08:07:48.901218] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:24:54.915 [2024-11-27 08:07:48.901262] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566917 ] 00:24:54.915 [2024-11-27 08:07:48.962107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.915 [2024-11-27 08:07:49.005076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.182 08:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.121 [2024-11-27 08:07:50.200096] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:56.121 [2024-11-27 08:07:50.200117] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:56.121 [2024-11-27 08:07:50.200136] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:56.380 [2024-11-27 08:07:50.287402] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:56.380 [2024-11-27 08:07:50.429265] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:56.380 [2024-11-27 08:07:50.430061] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2011a50:1 started. 00:24:56.380 [2024-11-27 08:07:50.431490] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:56.380 [2024-11-27 08:07:50.431531] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:56.380 [2024-11-27 08:07:50.431551] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:56.380 [2024-11-27 08:07:50.431563] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:56.380 [2024-11-27 08:07:50.431581] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.380 [2024-11-27 08:07:50.437438] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2011a50 was disconnected and freed. delete nvme_qpair. 
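The host side of the discovery test is a second SPDK application driven over its own RPC socket: it is started with --wait-for-rpc, configured, released with framework_start_init, and then told to attach to the discovery service at 10.0.0.2:8009, after which the wait_for_bdev helper polls bdev_get_bdevs until nvme0n1 appears. A sketch of that flow, with the arguments taken from the trace:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_sock=/tmp/host.sock
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r $host_sock --wait-for-rpc -L bdev_nvme &
  $rpc_py -s $host_sock bdev_nvme_set_options -e 1                    # options exactly as passed in the trace
  $rpc_py -s $host_sock framework_start_init
  $rpc_py -s $host_sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  # wait_for_bdev: repeat until the sorted bdev list equals the expected name (nvme0n1)
  $rpc_py -s $host_sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs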
00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:56.380 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:56.639 08:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.575 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.575 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.575 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.575 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.575 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.575 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.575 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.575 08:07:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.575 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:57.834 08:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:58.767 08:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.767 08:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.767 08:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.767 08:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.767 08:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.767 08:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.768 08:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.768 08:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.768 08:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:58.768 08:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:59.702 08:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.702 08:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.702 08:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.702 08:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.702 08:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.702 08:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.702 08:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.702 08:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.702 08:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:59.702 08:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.079 08:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.079 08:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.079 08:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.079 08:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.079 08:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.079 08:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.079 08:07:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.079 08:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.079 08:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:01.079 08:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:02.013 08:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.013 08:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.013 08:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:02.013 08:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.013 08:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.013 08:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:02.013 08:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:02.013 08:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.013 08:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:02.013 08:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:02.013 [2024-11-27 08:07:55.872987] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:02.013 [2024-11-27 08:07:55.873026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.013 [2024-11-27 08:07:55.873037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.013 [2024-11-27 08:07:55.873046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.013 [2024-11-27 08:07:55.873053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.013 [2024-11-27 08:07:55.873060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.013 [2024-11-27 08:07:55.873067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.013 [2024-11-27 08:07:55.873074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.013 [2024-11-27 08:07:55.873081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.013 [2024-11-27 08:07:55.873088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.013 [2024-11-27 08:07:55.873094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.013 [2024-11-27 08:07:55.873101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(6) to be set 00:25:02.013 [2024-11-27 08:07:55.883009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fee240 (9): Bad file descriptor 00:25:02.013 [2024-11-27 08:07:55.893044] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:02.013 [2024-11-27 08:07:55.893058] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:02.013 [2024-11-27 08:07:55.893063] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:02.013 [2024-11-27 08:07:55.893068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:02.013 [2024-11-27 08:07:55.893091] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:02.995 08:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.995 08:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.995 08:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:02.995 08:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.995 08:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.995 08:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:02.995 08:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:02.995 [2024-11-27 08:07:56.911973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:02.995 [2024-11-27 08:07:56.912015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fee240 with addr=10.0.0.2, port=4420 00:25:02.995 [2024-11-27 08:07:56.912031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(6) to be set 00:25:02.995 [2024-11-27 08:07:56.912058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fee240 (9): Bad file descriptor 00:25:02.995 [2024-11-27 08:07:56.912468] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:02.995 [2024-11-27 08:07:56.912496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:02.995 [2024-11-27 08:07:56.912506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:02.995 [2024-11-27 08:07:56.912518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:02.995 [2024-11-27 08:07:56.912527] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:02.995 [2024-11-27 08:07:56.912534] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:02.995 [2024-11-27 08:07:56.912540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:02.995 [2024-11-27 08:07:56.912550] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:02.995 [2024-11-27 08:07:56.912556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:02.995 08:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.995 08:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:02.995 08:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:04.003 [2024-11-27 08:07:57.915037] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.003 [2024-11-27 08:07:57.915057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.003 [2024-11-27 08:07:57.915070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.003 [2024-11-27 08:07:57.915077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.003 [2024-11-27 08:07:57.915085] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:04.003 [2024-11-27 08:07:57.915091] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.003 [2024-11-27 08:07:57.915096] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.003 [2024-11-27 08:07:57.915100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
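Editor's note: the entries above show the reconnect poller repeatedly failing against the now-unreachable 10.0.0.2:4420 (connect errno 110, "Bad file descriptor" on flush) and clearing pending resets. As a hypothetical observation step, not something the test itself runs, the controller state during this loop could be inspected over the same RPC socket the test uses:

    # Hypothetical: dump NVMe controller state while the reconnect loop spins.
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .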
00:25:04.003 [2024-11-27 08:07:57.915120] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:04.003 [2024-11-27 08:07:57.915141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.003 [2024-11-27 08:07:57.915154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.003 [2024-11-27 08:07:57.915164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.003 [2024-11-27 08:07:57.915171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.003 [2024-11-27 08:07:57.915178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.003 [2024-11-27 08:07:57.915185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.003 [2024-11-27 08:07:57.915192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.003 [2024-11-27 08:07:57.915199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.003 [2024-11-27 08:07:57.915206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.003 [2024-11-27 08:07:57.915212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.003 [2024-11-27 08:07:57.915219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:25:04.003 [2024-11-27 08:07:57.915300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdd910 (9): Bad file descriptor 00:25:04.003 [2024-11-27 08:07:57.916312] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:04.003 [2024-11-27 08:07:57.916323] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:04.003 08:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:04.003 08:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:05.378 08:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:05.378 08:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.378 08:07:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:05.378 08:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:05.378 08:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.378 08:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:05.378 08:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.378 08:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.378 08:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:05.378 08:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:05.944 [2024-11-27 08:07:59.974482] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:05.944 [2024-11-27 08:07:59.974499] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:05.944 [2024-11-27 08:07:59.974512] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:06.203 [2024-11-27 08:08:00.101921] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:06.203 08:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:06.203 08:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.203 08:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:06.203 08:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.203 08:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:06.203 08:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:06.203 08:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:06.203 08:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.203 08:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:06.203 08:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:06.203 [2024-11-27 08:08:00.202744] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:06.203 [2024-11-27 08:08:00.203407] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x201b4a0:1 started. 
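Editor's note: the log above records the full interface-flap sequence of this test: the target address is deleted inside the namespace and the link is downed, the host-side controller then fails to reconnect until nvme0n1 is removed, and once the address and link are restored the discovery service attaches a fresh controller and nvme1n1 appears. A condensed sketch of that sequence follows; the interface and namespace names are the ones from this run, and wait_for_bdev is as sketched earlier, not the literal script.

    # Take the target path away and wait for the old namespace bdev to vanish.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''            # reconnects fail (errno 110) until nvme0n1 is dropped

    # Restore the path and wait for rediscovery to attach a new controller.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1       # discovery log page brings the subsystem back as nvme1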
00:25:06.203 [2024-11-27 08:08:00.204515] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:06.203 [2024-11-27 08:08:00.204547] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:06.203 [2024-11-27 08:08:00.204565] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:06.203 [2024-11-27 08:08:00.204578] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:06.203 [2024-11-27 08:08:00.204585] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:06.203 [2024-11-27 08:08:00.212592] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x201b4a0 was disconnected and freed. delete nvme_qpair. 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2566917 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2566917 ']' 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2566917 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.140 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2566917 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2566917' 00:25:07.400 killing process with pid 2566917 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2566917 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2566917 00:25:07.400 08:08:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:07.400 rmmod nvme_tcp 00:25:07.400 rmmod nvme_fabrics 00:25:07.400 rmmod nvme_keyring 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2566886 ']' 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2566886 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2566886 ']' 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2566886 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.400 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2566886 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2566886' 00:25:07.659 killing process with pid 2566886 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2566886 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2566886 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.659 08:08:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:10.198 00:25:10.198 real 0m21.015s 00:25:10.198 user 0m26.512s 00:25:10.198 sys 0m5.566s 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.198 ************************************ 00:25:10.198 END TEST nvmf_discovery_remove_ifc 00:25:10.198 ************************************ 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.198 ************************************ 00:25:10.198 START TEST nvmf_identify_kernel_target 00:25:10.198 ************************************ 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:10.198 * Looking for test storage... 
00:25:10.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.198 08:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:10.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.198 --rc genhtml_branch_coverage=1 00:25:10.198 --rc genhtml_function_coverage=1 00:25:10.198 --rc genhtml_legend=1 00:25:10.198 --rc geninfo_all_blocks=1 00:25:10.198 --rc geninfo_unexecuted_blocks=1 00:25:10.198 00:25:10.198 ' 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:10.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.198 --rc genhtml_branch_coverage=1 00:25:10.198 --rc genhtml_function_coverage=1 00:25:10.198 --rc genhtml_legend=1 00:25:10.198 --rc geninfo_all_blocks=1 00:25:10.198 --rc geninfo_unexecuted_blocks=1 00:25:10.198 00:25:10.198 ' 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:10.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.198 --rc genhtml_branch_coverage=1 00:25:10.198 --rc genhtml_function_coverage=1 00:25:10.198 --rc genhtml_legend=1 00:25:10.198 --rc geninfo_all_blocks=1 00:25:10.198 --rc geninfo_unexecuted_blocks=1 00:25:10.198 00:25:10.198 ' 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:10.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.198 --rc genhtml_branch_coverage=1 00:25:10.198 --rc genhtml_function_coverage=1 00:25:10.198 --rc genhtml_legend=1 00:25:10.198 --rc geninfo_all_blocks=1 00:25:10.198 --rc geninfo_unexecuted_blocks=1 00:25:10.198 00:25:10.198 ' 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.198 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:10.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:10.199 08:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.477 08:08:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:15.477 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:15.477 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:15.477 Found net devices under 0000:86:00.0: cvl_0_0 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:15.477 Found net devices under 0000:86:00.1: cvl_0_1 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.477 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:25:15.478 00:25:15.478 --- 10.0.0.2 ping statistics --- 00:25:15.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.478 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:15.478 00:25:15.478 --- 10.0.0.1 ping statistics --- 00:25:15.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.478 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.478 08:08:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:15.478 08:08:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:18.013 Waiting for block devices as requested 00:25:18.271 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:18.271 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:18.271 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:18.530 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:18.530 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:18.530 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:18.530 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:18.789 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:18.789 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:18.789 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:19.047 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:19.047 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:19.047 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:19.047 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:19.307 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:19.307 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:19.307 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
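The configure_kernel_target trace that starts here and continues below picks an idle local NVMe block device (the GPT probe on /dev/nvme0n1 reports "No valid GPT data, bailing", so the disk is treated as free and exported) and then builds a Linux kernel NVMe-oF target through configfs. Condensed into plain commands, the configfs part looks roughly like the sketch below; the NQN, device, address and port values are taken from the log, while the attribute file names written by the bare echo calls are assumptions based on the standard nvmet configfs layout, since the trace only records the echoed values.

  # Sketch of the kernel target setup, assuming standard nvmet configfs attribute names
  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  modprobe nvmet                                                          # expose $nvmet
  mkdir "$nvmet/subsystems/$nqn"                                          # subsystem
  mkdir "$nvmet/subsystems/$nqn/namespaces/1"                             # namespace 1
  mkdir "$nvmet/ports/1"                                                  # listener port
  echo "SPDK-$nqn"  > "$nvmet/subsystems/$nqn/attr_model"                 # assumed attr file
  echo 1            > "$nvmet/subsystems/$nqn/attr_allow_any_host"        # assumed attr file
  echo /dev/nvme0n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"   # assumed attr file
  echo 1            > "$nvmet/subsystems/$nqn/namespaces/1/enable"        # assumed attr file
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"                        # assumed attr file
  echo tcp          > "$nvmet/ports/1/addr_trtype"                        # assumed attr file
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"                       # assumed attr file
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"                        # assumed attr file
  ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"             # publish on the port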
00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:19.565 No valid GPT data, bailing 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:19.565 00:25:19.565 Discovery Log Number of Records 2, Generation counter 2 00:25:19.565 =====Discovery Log Entry 0====== 00:25:19.565 trtype: tcp 00:25:19.565 adrfam: ipv4 00:25:19.565 subtype: current discovery subsystem 00:25:19.565 treq: not specified, sq flow control disable supported 00:25:19.565 portid: 1 00:25:19.565 trsvcid: 4420 00:25:19.565 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:19.565 traddr: 10.0.0.1 00:25:19.565 eflags: none 00:25:19.565 sectype: none 00:25:19.565 =====Discovery Log Entry 1====== 00:25:19.565 trtype: tcp 00:25:19.565 adrfam: ipv4 00:25:19.565 subtype: nvme subsystem 00:25:19.565 treq: not specified, sq flow control disable 
supported 00:25:19.565 portid: 1 00:25:19.565 trsvcid: 4420 00:25:19.565 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:19.565 traddr: 10.0.0.1 00:25:19.565 eflags: none 00:25:19.565 sectype: none 00:25:19.565 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:19.565 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:19.825 ===================================================== 00:25:19.825 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:19.825 ===================================================== 00:25:19.825 Controller Capabilities/Features 00:25:19.825 ================================ 00:25:19.825 Vendor ID: 0000 00:25:19.825 Subsystem Vendor ID: 0000 00:25:19.825 Serial Number: c18f445d17f3e7c71495 00:25:19.825 Model Number: Linux 00:25:19.825 Firmware Version: 6.8.9-20 00:25:19.825 Recommended Arb Burst: 0 00:25:19.825 IEEE OUI Identifier: 00 00 00 00:25:19.825 Multi-path I/O 00:25:19.825 May have multiple subsystem ports: No 00:25:19.825 May have multiple controllers: No 00:25:19.825 Associated with SR-IOV VF: No 00:25:19.825 Max Data Transfer Size: Unlimited 00:25:19.825 Max Number of Namespaces: 0 00:25:19.825 Max Number of I/O Queues: 1024 00:25:19.825 NVMe Specification Version (VS): 1.3 00:25:19.825 NVMe Specification Version (Identify): 1.3 00:25:19.825 Maximum Queue Entries: 1024 00:25:19.825 Contiguous Queues Required: No 00:25:19.825 Arbitration Mechanisms Supported 00:25:19.825 Weighted Round Robin: Not Supported 00:25:19.825 Vendor Specific: Not Supported 00:25:19.825 Reset Timeout: 7500 ms 00:25:19.825 Doorbell Stride: 4 bytes 00:25:19.825 NVM Subsystem Reset: Not Supported 00:25:19.825 Command Sets Supported 00:25:19.825 NVM Command Set: Supported 00:25:19.825 Boot Partition: Not Supported 00:25:19.825 Memory Page Size Minimum: 4096 bytes 00:25:19.825 Memory Page Size Maximum: 4096 bytes 00:25:19.825 Persistent Memory Region: Not Supported 00:25:19.825 Optional Asynchronous Events Supported 00:25:19.825 Namespace Attribute Notices: Not Supported 00:25:19.825 Firmware Activation Notices: Not Supported 00:25:19.825 ANA Change Notices: Not Supported 00:25:19.825 PLE Aggregate Log Change Notices: Not Supported 00:25:19.825 LBA Status Info Alert Notices: Not Supported 00:25:19.825 EGE Aggregate Log Change Notices: Not Supported 00:25:19.825 Normal NVM Subsystem Shutdown event: Not Supported 00:25:19.825 Zone Descriptor Change Notices: Not Supported 00:25:19.825 Discovery Log Change Notices: Supported 00:25:19.825 Controller Attributes 00:25:19.825 128-bit Host Identifier: Not Supported 00:25:19.825 Non-Operational Permissive Mode: Not Supported 00:25:19.825 NVM Sets: Not Supported 00:25:19.825 Read Recovery Levels: Not Supported 00:25:19.825 Endurance Groups: Not Supported 00:25:19.825 Predictable Latency Mode: Not Supported 00:25:19.825 Traffic Based Keep ALive: Not Supported 00:25:19.825 Namespace Granularity: Not Supported 00:25:19.825 SQ Associations: Not Supported 00:25:19.825 UUID List: Not Supported 00:25:19.825 Multi-Domain Subsystem: Not Supported 00:25:19.825 Fixed Capacity Management: Not Supported 00:25:19.825 Variable Capacity Management: Not Supported 00:25:19.825 Delete Endurance Group: Not Supported 00:25:19.825 Delete NVM Set: Not Supported 00:25:19.825 Extended LBA Formats Supported: Not Supported 00:25:19.825 Flexible Data Placement 
Supported: Not Supported 00:25:19.825 00:25:19.825 Controller Memory Buffer Support 00:25:19.825 ================================ 00:25:19.825 Supported: No 00:25:19.825 00:25:19.825 Persistent Memory Region Support 00:25:19.825 ================================ 00:25:19.825 Supported: No 00:25:19.825 00:25:19.825 Admin Command Set Attributes 00:25:19.825 ============================ 00:25:19.825 Security Send/Receive: Not Supported 00:25:19.825 Format NVM: Not Supported 00:25:19.825 Firmware Activate/Download: Not Supported 00:25:19.825 Namespace Management: Not Supported 00:25:19.825 Device Self-Test: Not Supported 00:25:19.825 Directives: Not Supported 00:25:19.825 NVMe-MI: Not Supported 00:25:19.825 Virtualization Management: Not Supported 00:25:19.825 Doorbell Buffer Config: Not Supported 00:25:19.825 Get LBA Status Capability: Not Supported 00:25:19.825 Command & Feature Lockdown Capability: Not Supported 00:25:19.825 Abort Command Limit: 1 00:25:19.825 Async Event Request Limit: 1 00:25:19.825 Number of Firmware Slots: N/A 00:25:19.825 Firmware Slot 1 Read-Only: N/A 00:25:19.825 Firmware Activation Without Reset: N/A 00:25:19.825 Multiple Update Detection Support: N/A 00:25:19.825 Firmware Update Granularity: No Information Provided 00:25:19.825 Per-Namespace SMART Log: No 00:25:19.825 Asymmetric Namespace Access Log Page: Not Supported 00:25:19.825 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:19.825 Command Effects Log Page: Not Supported 00:25:19.826 Get Log Page Extended Data: Supported 00:25:19.826 Telemetry Log Pages: Not Supported 00:25:19.826 Persistent Event Log Pages: Not Supported 00:25:19.826 Supported Log Pages Log Page: May Support 00:25:19.826 Commands Supported & Effects Log Page: Not Supported 00:25:19.826 Feature Identifiers & Effects Log Page:May Support 00:25:19.826 NVMe-MI Commands & Effects Log Page: May Support 00:25:19.826 Data Area 4 for Telemetry Log: Not Supported 00:25:19.826 Error Log Page Entries Supported: 1 00:25:19.826 Keep Alive: Not Supported 00:25:19.826 00:25:19.826 NVM Command Set Attributes 00:25:19.826 ========================== 00:25:19.826 Submission Queue Entry Size 00:25:19.826 Max: 1 00:25:19.826 Min: 1 00:25:19.826 Completion Queue Entry Size 00:25:19.826 Max: 1 00:25:19.826 Min: 1 00:25:19.826 Number of Namespaces: 0 00:25:19.826 Compare Command: Not Supported 00:25:19.826 Write Uncorrectable Command: Not Supported 00:25:19.826 Dataset Management Command: Not Supported 00:25:19.826 Write Zeroes Command: Not Supported 00:25:19.826 Set Features Save Field: Not Supported 00:25:19.826 Reservations: Not Supported 00:25:19.826 Timestamp: Not Supported 00:25:19.826 Copy: Not Supported 00:25:19.826 Volatile Write Cache: Not Present 00:25:19.826 Atomic Write Unit (Normal): 1 00:25:19.826 Atomic Write Unit (PFail): 1 00:25:19.826 Atomic Compare & Write Unit: 1 00:25:19.826 Fused Compare & Write: Not Supported 00:25:19.826 Scatter-Gather List 00:25:19.826 SGL Command Set: Supported 00:25:19.826 SGL Keyed: Not Supported 00:25:19.826 SGL Bit Bucket Descriptor: Not Supported 00:25:19.826 SGL Metadata Pointer: Not Supported 00:25:19.826 Oversized SGL: Not Supported 00:25:19.826 SGL Metadata Address: Not Supported 00:25:19.826 SGL Offset: Supported 00:25:19.826 Transport SGL Data Block: Not Supported 00:25:19.826 Replay Protected Memory Block: Not Supported 00:25:19.826 00:25:19.826 Firmware Slot Information 00:25:19.826 ========================= 00:25:19.826 Active slot: 0 00:25:19.826 00:25:19.826 00:25:19.826 Error Log 00:25:19.826 
========= 00:25:19.826 00:25:19.826 Active Namespaces 00:25:19.826 ================= 00:25:19.826 Discovery Log Page 00:25:19.826 ================== 00:25:19.826 Generation Counter: 2 00:25:19.826 Number of Records: 2 00:25:19.826 Record Format: 0 00:25:19.826 00:25:19.826 Discovery Log Entry 0 00:25:19.826 ---------------------- 00:25:19.826 Transport Type: 3 (TCP) 00:25:19.826 Address Family: 1 (IPv4) 00:25:19.826 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:19.826 Entry Flags: 00:25:19.826 Duplicate Returned Information: 0 00:25:19.826 Explicit Persistent Connection Support for Discovery: 0 00:25:19.826 Transport Requirements: 00:25:19.826 Secure Channel: Not Specified 00:25:19.826 Port ID: 1 (0x0001) 00:25:19.826 Controller ID: 65535 (0xffff) 00:25:19.826 Admin Max SQ Size: 32 00:25:19.826 Transport Service Identifier: 4420 00:25:19.826 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:19.826 Transport Address: 10.0.0.1 00:25:19.826 Discovery Log Entry 1 00:25:19.826 ---------------------- 00:25:19.826 Transport Type: 3 (TCP) 00:25:19.826 Address Family: 1 (IPv4) 00:25:19.826 Subsystem Type: 2 (NVM Subsystem) 00:25:19.826 Entry Flags: 00:25:19.826 Duplicate Returned Information: 0 00:25:19.826 Explicit Persistent Connection Support for Discovery: 0 00:25:19.826 Transport Requirements: 00:25:19.826 Secure Channel: Not Specified 00:25:19.826 Port ID: 1 (0x0001) 00:25:19.826 Controller ID: 65535 (0xffff) 00:25:19.826 Admin Max SQ Size: 32 00:25:19.826 Transport Service Identifier: 4420 00:25:19.826 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:19.826 Transport Address: 10.0.0.1 00:25:19.826 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:19.826 get_feature(0x01) failed 00:25:19.826 get_feature(0x02) failed 00:25:19.826 get_feature(0x04) failed 00:25:19.826 ===================================================== 00:25:19.826 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:19.826 ===================================================== 00:25:19.826 Controller Capabilities/Features 00:25:19.826 ================================ 00:25:19.826 Vendor ID: 0000 00:25:19.826 Subsystem Vendor ID: 0000 00:25:19.826 Serial Number: 6069d2dc6455a78aa19a 00:25:19.826 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:19.826 Firmware Version: 6.8.9-20 00:25:19.826 Recommended Arb Burst: 6 00:25:19.826 IEEE OUI Identifier: 00 00 00 00:25:19.826 Multi-path I/O 00:25:19.826 May have multiple subsystem ports: Yes 00:25:19.826 May have multiple controllers: Yes 00:25:19.826 Associated with SR-IOV VF: No 00:25:19.826 Max Data Transfer Size: Unlimited 00:25:19.826 Max Number of Namespaces: 1024 00:25:19.826 Max Number of I/O Queues: 128 00:25:19.826 NVMe Specification Version (VS): 1.3 00:25:19.826 NVMe Specification Version (Identify): 1.3 00:25:19.826 Maximum Queue Entries: 1024 00:25:19.826 Contiguous Queues Required: No 00:25:19.826 Arbitration Mechanisms Supported 00:25:19.826 Weighted Round Robin: Not Supported 00:25:19.826 Vendor Specific: Not Supported 00:25:19.826 Reset Timeout: 7500 ms 00:25:19.826 Doorbell Stride: 4 bytes 00:25:19.826 NVM Subsystem Reset: Not Supported 00:25:19.826 Command Sets Supported 00:25:19.826 NVM Command Set: Supported 00:25:19.826 Boot Partition: Not Supported 00:25:19.826 
Memory Page Size Minimum: 4096 bytes 00:25:19.826 Memory Page Size Maximum: 4096 bytes 00:25:19.826 Persistent Memory Region: Not Supported 00:25:19.826 Optional Asynchronous Events Supported 00:25:19.826 Namespace Attribute Notices: Supported 00:25:19.826 Firmware Activation Notices: Not Supported 00:25:19.826 ANA Change Notices: Supported 00:25:19.826 PLE Aggregate Log Change Notices: Not Supported 00:25:19.826 LBA Status Info Alert Notices: Not Supported 00:25:19.826 EGE Aggregate Log Change Notices: Not Supported 00:25:19.826 Normal NVM Subsystem Shutdown event: Not Supported 00:25:19.826 Zone Descriptor Change Notices: Not Supported 00:25:19.826 Discovery Log Change Notices: Not Supported 00:25:19.826 Controller Attributes 00:25:19.826 128-bit Host Identifier: Supported 00:25:19.826 Non-Operational Permissive Mode: Not Supported 00:25:19.826 NVM Sets: Not Supported 00:25:19.826 Read Recovery Levels: Not Supported 00:25:19.826 Endurance Groups: Not Supported 00:25:19.826 Predictable Latency Mode: Not Supported 00:25:19.826 Traffic Based Keep ALive: Supported 00:25:19.826 Namespace Granularity: Not Supported 00:25:19.826 SQ Associations: Not Supported 00:25:19.826 UUID List: Not Supported 00:25:19.826 Multi-Domain Subsystem: Not Supported 00:25:19.826 Fixed Capacity Management: Not Supported 00:25:19.826 Variable Capacity Management: Not Supported 00:25:19.826 Delete Endurance Group: Not Supported 00:25:19.826 Delete NVM Set: Not Supported 00:25:19.826 Extended LBA Formats Supported: Not Supported 00:25:19.826 Flexible Data Placement Supported: Not Supported 00:25:19.826 00:25:19.826 Controller Memory Buffer Support 00:25:19.826 ================================ 00:25:19.826 Supported: No 00:25:19.826 00:25:19.826 Persistent Memory Region Support 00:25:19.826 ================================ 00:25:19.826 Supported: No 00:25:19.826 00:25:19.826 Admin Command Set Attributes 00:25:19.826 ============================ 00:25:19.826 Security Send/Receive: Not Supported 00:25:19.826 Format NVM: Not Supported 00:25:19.826 Firmware Activate/Download: Not Supported 00:25:19.826 Namespace Management: Not Supported 00:25:19.826 Device Self-Test: Not Supported 00:25:19.826 Directives: Not Supported 00:25:19.826 NVMe-MI: Not Supported 00:25:19.826 Virtualization Management: Not Supported 00:25:19.826 Doorbell Buffer Config: Not Supported 00:25:19.826 Get LBA Status Capability: Not Supported 00:25:19.826 Command & Feature Lockdown Capability: Not Supported 00:25:19.826 Abort Command Limit: 4 00:25:19.826 Async Event Request Limit: 4 00:25:19.826 Number of Firmware Slots: N/A 00:25:19.826 Firmware Slot 1 Read-Only: N/A 00:25:19.826 Firmware Activation Without Reset: N/A 00:25:19.826 Multiple Update Detection Support: N/A 00:25:19.826 Firmware Update Granularity: No Information Provided 00:25:19.826 Per-Namespace SMART Log: Yes 00:25:19.826 Asymmetric Namespace Access Log Page: Supported 00:25:19.826 ANA Transition Time : 10 sec 00:25:19.826 00:25:19.826 Asymmetric Namespace Access Capabilities 00:25:19.827 ANA Optimized State : Supported 00:25:19.827 ANA Non-Optimized State : Supported 00:25:19.827 ANA Inaccessible State : Supported 00:25:19.827 ANA Persistent Loss State : Supported 00:25:19.827 ANA Change State : Supported 00:25:19.827 ANAGRPID is not changed : No 00:25:19.827 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:19.827 00:25:19.827 ANA Group Identifier Maximum : 128 00:25:19.827 Number of ANA Group Identifiers : 128 00:25:19.827 Max Number of Allowed Namespaces : 1024 00:25:19.827 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:19.827 Command Effects Log Page: Supported 00:25:19.827 Get Log Page Extended Data: Supported 00:25:19.827 Telemetry Log Pages: Not Supported 00:25:19.827 Persistent Event Log Pages: Not Supported 00:25:19.827 Supported Log Pages Log Page: May Support 00:25:19.827 Commands Supported & Effects Log Page: Not Supported 00:25:19.827 Feature Identifiers & Effects Log Page:May Support 00:25:19.827 NVMe-MI Commands & Effects Log Page: May Support 00:25:19.827 Data Area 4 for Telemetry Log: Not Supported 00:25:19.827 Error Log Page Entries Supported: 128 00:25:19.827 Keep Alive: Supported 00:25:19.827 Keep Alive Granularity: 1000 ms 00:25:19.827 00:25:19.827 NVM Command Set Attributes 00:25:19.827 ========================== 00:25:19.827 Submission Queue Entry Size 00:25:19.827 Max: 64 00:25:19.827 Min: 64 00:25:19.827 Completion Queue Entry Size 00:25:19.827 Max: 16 00:25:19.827 Min: 16 00:25:19.827 Number of Namespaces: 1024 00:25:19.827 Compare Command: Not Supported 00:25:19.827 Write Uncorrectable Command: Not Supported 00:25:19.827 Dataset Management Command: Supported 00:25:19.827 Write Zeroes Command: Supported 00:25:19.827 Set Features Save Field: Not Supported 00:25:19.827 Reservations: Not Supported 00:25:19.827 Timestamp: Not Supported 00:25:19.827 Copy: Not Supported 00:25:19.827 Volatile Write Cache: Present 00:25:19.827 Atomic Write Unit (Normal): 1 00:25:19.827 Atomic Write Unit (PFail): 1 00:25:19.827 Atomic Compare & Write Unit: 1 00:25:19.827 Fused Compare & Write: Not Supported 00:25:19.827 Scatter-Gather List 00:25:19.827 SGL Command Set: Supported 00:25:19.827 SGL Keyed: Not Supported 00:25:19.827 SGL Bit Bucket Descriptor: Not Supported 00:25:19.827 SGL Metadata Pointer: Not Supported 00:25:19.827 Oversized SGL: Not Supported 00:25:19.827 SGL Metadata Address: Not Supported 00:25:19.827 SGL Offset: Supported 00:25:19.827 Transport SGL Data Block: Not Supported 00:25:19.827 Replay Protected Memory Block: Not Supported 00:25:19.827 00:25:19.827 Firmware Slot Information 00:25:19.827 ========================= 00:25:19.827 Active slot: 0 00:25:19.827 00:25:19.827 Asymmetric Namespace Access 00:25:19.827 =========================== 00:25:19.827 Change Count : 0 00:25:19.827 Number of ANA Group Descriptors : 1 00:25:19.827 ANA Group Descriptor : 0 00:25:19.827 ANA Group ID : 1 00:25:19.827 Number of NSID Values : 1 00:25:19.827 Change Count : 0 00:25:19.827 ANA State : 1 00:25:19.827 Namespace Identifier : 1 00:25:19.827 00:25:19.827 Commands Supported and Effects 00:25:19.827 ============================== 00:25:19.827 Admin Commands 00:25:19.827 -------------- 00:25:19.827 Get Log Page (02h): Supported 00:25:19.827 Identify (06h): Supported 00:25:19.827 Abort (08h): Supported 00:25:19.827 Set Features (09h): Supported 00:25:19.827 Get Features (0Ah): Supported 00:25:19.827 Asynchronous Event Request (0Ch): Supported 00:25:19.827 Keep Alive (18h): Supported 00:25:19.827 I/O Commands 00:25:19.827 ------------ 00:25:19.827 Flush (00h): Supported 00:25:19.827 Write (01h): Supported LBA-Change 00:25:19.827 Read (02h): Supported 00:25:19.827 Write Zeroes (08h): Supported LBA-Change 00:25:19.827 Dataset Management (09h): Supported 00:25:19.827 00:25:19.827 Error Log 00:25:19.827 ========= 00:25:19.827 Entry: 0 00:25:19.827 Error Count: 0x3 00:25:19.827 Submission Queue Id: 0x0 00:25:19.827 Command Id: 0x5 00:25:19.827 Phase Bit: 0 00:25:19.827 Status Code: 0x2 00:25:19.827 Status Code Type: 0x0 00:25:19.827 Do Not Retry: 1 00:25:19.827 
Error Location: 0x28 00:25:19.827 LBA: 0x0 00:25:19.827 Namespace: 0x0 00:25:19.827 Vendor Log Page: 0x0 00:25:19.827 ----------- 00:25:19.827 Entry: 1 00:25:19.827 Error Count: 0x2 00:25:19.827 Submission Queue Id: 0x0 00:25:19.827 Command Id: 0x5 00:25:19.827 Phase Bit: 0 00:25:19.827 Status Code: 0x2 00:25:19.827 Status Code Type: 0x0 00:25:19.827 Do Not Retry: 1 00:25:19.827 Error Location: 0x28 00:25:19.827 LBA: 0x0 00:25:19.827 Namespace: 0x0 00:25:19.827 Vendor Log Page: 0x0 00:25:19.827 ----------- 00:25:19.827 Entry: 2 00:25:19.827 Error Count: 0x1 00:25:19.827 Submission Queue Id: 0x0 00:25:19.827 Command Id: 0x4 00:25:19.827 Phase Bit: 0 00:25:19.827 Status Code: 0x2 00:25:19.827 Status Code Type: 0x0 00:25:19.827 Do Not Retry: 1 00:25:19.827 Error Location: 0x28 00:25:19.827 LBA: 0x0 00:25:19.827 Namespace: 0x0 00:25:19.827 Vendor Log Page: 0x0 00:25:19.827 00:25:19.827 Number of Queues 00:25:19.827 ================ 00:25:19.827 Number of I/O Submission Queues: 128 00:25:19.827 Number of I/O Completion Queues: 128 00:25:19.827 00:25:19.827 ZNS Specific Controller Data 00:25:19.827 ============================ 00:25:19.827 Zone Append Size Limit: 0 00:25:19.827 00:25:19.827 00:25:19.827 Active Namespaces 00:25:19.827 ================= 00:25:19.827 get_feature(0x05) failed 00:25:19.827 Namespace ID:1 00:25:19.827 Command Set Identifier: NVM (00h) 00:25:19.827 Deallocate: Supported 00:25:19.827 Deallocated/Unwritten Error: Not Supported 00:25:19.827 Deallocated Read Value: Unknown 00:25:19.827 Deallocate in Write Zeroes: Not Supported 00:25:19.827 Deallocated Guard Field: 0xFFFF 00:25:19.827 Flush: Supported 00:25:19.827 Reservation: Not Supported 00:25:19.827 Namespace Sharing Capabilities: Multiple Controllers 00:25:19.827 Size (in LBAs): 1953525168 (931GiB) 00:25:19.827 Capacity (in LBAs): 1953525168 (931GiB) 00:25:19.827 Utilization (in LBAs): 1953525168 (931GiB) 00:25:19.827 UUID: f1165def-892c-481b-aa04-09bbb989e49d 00:25:19.827 Thin Provisioning: Not Supported 00:25:19.827 Per-NS Atomic Units: Yes 00:25:19.827 Atomic Boundary Size (Normal): 0 00:25:19.827 Atomic Boundary Size (PFail): 0 00:25:19.827 Atomic Boundary Offset: 0 00:25:19.827 NGUID/EUI64 Never Reused: No 00:25:19.827 ANA group ID: 1 00:25:19.827 Namespace Write Protected: No 00:25:19.827 Number of LBA Formats: 1 00:25:19.827 Current LBA Format: LBA Format #00 00:25:19.827 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:19.827 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.827 rmmod nvme_tcp 00:25:19.827 rmmod nvme_fabrics 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:19.827 08:08:13 
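After the identify output, the EXIT trap runs nvmftestfini and then clean_kernel_target: the nvme-tcp/nvme-fabrics modules are unloaded (traced just above), the SPDK iptables rule and the cvl_0_0_ns_spdk namespace are removed, and the kernel target is dismantled on the lines that follow. The teardown is the mirror image of the configfs setup: disable the namespace, unlink the subsystem from the port, remove the configfs directories innermost-first, and unload the nvmet modules. A condensed sketch; the file targeted by the bare 'echo 0' is not shown in the trace, so writing it to the namespace enable attribute is an assumption:

  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # assumed target of 'echo 0'
  rm -f  "$nvmet/ports/1/subsystems/$nqn"                 # unlink subsystem from the port
  rmdir  "$nvmet/subsystems/$nqn/namespaces/1"            # innermost directories first
  rmdir  "$nvmet/ports/1"
  rmdir  "$nvmet/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet                             # unload the kernel target modules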
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.827 08:08:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:22.363 08:08:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:24.897 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:24.897 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:25.832 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:25.832 00:25:25.832 real 0m15.892s 00:25:25.832 user 0m4.039s 00:25:25.832 sys 0m8.258s 00:25:25.832 08:08:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.832 08:08:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.832 ************************************ 00:25:25.832 END TEST nvmf_identify_kernel_target 00:25:25.832 ************************************ 00:25:25.832 08:08:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:25.832 08:08:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:25.832 08:08:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.832 08:08:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.832 ************************************ 00:25:25.832 START TEST nvmf_auth_host 00:25:25.832 ************************************ 00:25:25.832 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:25.832 * Looking for test storage... 
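With the kernel target gone and setup.sh rebinding the devices to vfio-pci/ioatdma, run_test prints the timing summary and the END TEST banner for nvmf_identify_kernel_target, then launches nvmf_auth_host. For reference, the core of the test that just finished was one discovery plus two identify passes against the kernel target; copied from the trace above, with the long build paths shortened to the bare tool names, the commands were roughly:

  # Discover the kernel target over TCP from the initiator-side interface
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
       --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
  # Identify the discovery subsystem, then the NVM subsystem it advertises
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The get_feature(0x01/0x02/0x04/0x05) failed lines in that output come from optional features the kernel target does not implement; the test still runs to completion.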
00:25:25.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:25.832 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:25.832 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:25.832 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.090 --rc genhtml_branch_coverage=1 00:25:26.090 --rc genhtml_function_coverage=1 00:25:26.090 --rc genhtml_legend=1 00:25:26.090 --rc geninfo_all_blocks=1 00:25:26.090 --rc geninfo_unexecuted_blocks=1 00:25:26.090 00:25:26.090 ' 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.090 --rc genhtml_branch_coverage=1 00:25:26.090 --rc genhtml_function_coverage=1 00:25:26.090 --rc genhtml_legend=1 00:25:26.090 --rc geninfo_all_blocks=1 00:25:26.090 --rc geninfo_unexecuted_blocks=1 00:25:26.090 00:25:26.090 ' 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.090 --rc genhtml_branch_coverage=1 00:25:26.090 --rc genhtml_function_coverage=1 00:25:26.090 --rc genhtml_legend=1 00:25:26.090 --rc geninfo_all_blocks=1 00:25:26.090 --rc geninfo_unexecuted_blocks=1 00:25:26.090 00:25:26.090 ' 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.090 --rc genhtml_branch_coverage=1 00:25:26.090 --rc genhtml_function_coverage=1 00:25:26.090 --rc genhtml_legend=1 00:25:26.090 --rc geninfo_all_blocks=1 00:25:26.090 --rc geninfo_unexecuted_blocks=1 00:25:26.090 00:25:26.090 ' 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.090 08:08:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:26.090 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.091 08:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:26.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:26.091 08:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.362 08:08:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:31.362 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:31.362 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.362 
08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:31.362 Found net devices under 0000:86:00.0: cvl_0_0 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:31.362 Found net devices under 0000:86:00.1: cvl_0_1 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.362 08:08:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.362 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:31.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:25:31.620 00:25:31.620 --- 10.0.0.2 ping statistics --- 00:25:31.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.620 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:31.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:25:31.620 00:25:31.620 --- 10.0.0.1 ping statistics --- 00:25:31.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.620 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:31.620 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2578868 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2578868 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2578868 ']' 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
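
The nvmf_tcp_init sequence traced above boils down to roughly the following shell steps (a condensed sketch of the commands visible in this run; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this test bed):

  ip netns add cvl_0_0_ns_spdk                                        # NVMF_TARGET_NAMESPACE
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move NVMF_TARGET_INTERFACE into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # NVMF_INITIATOR_IP stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # NVMF_FIRST_TARGET_IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                                  # root namespace -> test namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # test namespace -> root namespace

nvmf_tgt is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth), which is the nvmfpid that waitforlisten polls for in the surrounding records.
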
00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.621 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c5cf2195f4709f20cb4f8e902b7b6434 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.MeL 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c5cf2195f4709f20cb4f8e902b7b6434 0 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c5cf2195f4709f20cb4f8e902b7b6434 0 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c5cf2195f4709f20cb4f8e902b7b6434 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.MeL 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.MeL 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.MeL 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:31.879 08:08:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:31.879 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b044579ae2b21bf6473c0583dd822ff603459490df24b3988ad87424bf9da894 00:25:32.138 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:32.138 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vxX 00:25:32.138 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b044579ae2b21bf6473c0583dd822ff603459490df24b3988ad87424bf9da894 3 00:25:32.138 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b044579ae2b21bf6473c0583dd822ff603459490df24b3988ad87424bf9da894 3 00:25:32.138 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:32.138 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:32.138 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b044579ae2b21bf6473c0583dd822ff603459490df24b3988ad87424bf9da894 00:25:32.138 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:32.138 08:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vxX 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vxX 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.vxX 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a9b625e99450cdb10d5f97c71053e6a88371064afd15cc69 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wxP 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a9b625e99450cdb10d5f97c71053e6a88371064afd15cc69 0 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a9b625e99450cdb10d5f97c71053e6a88371064afd15cc69 0 
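
gen_dhchap_key, invoked above for keys[0]/ckeys[0] and repeated below for the remaining slots, pulls len/2 random bytes with xxd, wraps the resulting hex string into a DHHC-1 secret, and stores it in a 0600 temp file. A rough sketch of the helper (the name gen_dhchap_key_sketch and the Python body are reconstructions inferred from the printed DHHC-1 strings, not the literal format_key code from nvmf/common.sh; in particular the CRC byte order is an assumption):

  gen_dhchap_key_sketch() {   # hypothetical helper mirroring gen_dhchap_key in the trace
    local digest=$1 len=$2    # digest index: 0=null, 1=sha256, 2=sha384, 3=sha512
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex string of the requested length
    file=$(mktemp -t spdk.key.XXX)
    python3 - "$key" "$digest" > "$file" << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# DHHC-1:<digest>:<base64(secret || crc32)>: -- CRC byte order assumed little-endian
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
EOF
    chmod 0600 "$file"
    echo "$file"
  }

Called as gen_dhchap_key_sketch 0 32, this produces the kind of /tmp/spdk.key-null.* secret that the trace later registers with keyring_file_add_key.
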
00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a9b625e99450cdb10d5f97c71053e6a88371064afd15cc69 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wxP 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wxP 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wxP 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a62591dd3e20a753738fdb7c19403c98053ef8bdfd52cc1e 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yWG 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a62591dd3e20a753738fdb7c19403c98053ef8bdfd52cc1e 2 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a62591dd3e20a753738fdb7c19403c98053ef8bdfd52cc1e 2 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a62591dd3e20a753738fdb7c19403c98053ef8bdfd52cc1e 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:32.138 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yWG 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yWG 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.yWG 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:32.139 08:08:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ae09ddf7ea8f38ac49779dfd647e2452 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.B8j 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ae09ddf7ea8f38ac49779dfd647e2452 1 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ae09ddf7ea8f38ac49779dfd647e2452 1 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ae09ddf7ea8f38ac49779dfd647e2452 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.B8j 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.B8j 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.B8j 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=77651b8b0783db452f079a5def295fb5 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gjt 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 77651b8b0783db452f079a5def295fb5 1 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 77651b8b0783db452f079a5def295fb5 1 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=77651b8b0783db452f079a5def295fb5 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:32.139 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gjt 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gjt 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.gjt 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1ba8b7e660050e29393cf213a869f7d8170414039ce606cb 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3tM 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1ba8b7e660050e29393cf213a869f7d8170414039ce606cb 2 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1ba8b7e660050e29393cf213a869f7d8170414039ce606cb 2 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1ba8b7e660050e29393cf213a869f7d8170414039ce606cb 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3tM 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3tM 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.3tM 00:25:32.397 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:32.398 08:08:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8a4fb73f38e37b00c42aceeab854950f 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XSx 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8a4fb73f38e37b00c42aceeab854950f 0 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8a4fb73f38e37b00c42aceeab854950f 0 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8a4fb73f38e37b00c42aceeab854950f 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XSx 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XSx 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.XSx 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=72638c06836dbdf6bbaeb1e705506b525f6874b1dc990e7764f1ad78fdf55c3f 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vss 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 72638c06836dbdf6bbaeb1e705506b525f6874b1dc990e7764f1ad78fdf55c3f 3 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 72638c06836dbdf6bbaeb1e705506b525f6874b1dc990e7764f1ad78fdf55c3f 3 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=72638c06836dbdf6bbaeb1e705506b525f6874b1dc990e7764f1ad78fdf55c3f 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vss 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vss 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.vss 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2578868 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2578868 ']' 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.398 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MeL 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.vxX ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vxX 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wxP 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.yWG ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.yWG 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.B8j 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.gjt ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gjt 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.3tM 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.XSx ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.XSx 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.vss 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.657 08:08:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:32.657 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:32.658 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:32.658 08:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:35.188 Waiting for block devices as requested 00:25:35.446 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:35.446 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:35.446 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:35.704 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:35.704 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:35.704 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:35.704 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:35.962 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:35.962 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:35.962 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:35.962 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:36.220 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:36.220 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:36.220 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:36.478 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:36.478 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:36.478 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:37.045 No valid GPT data, bailing 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:37.045 08:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:37.045 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:37.304 00:25:37.304 Discovery Log Number of Records 2, Generation counter 2 00:25:37.304 =====Discovery Log Entry 0====== 00:25:37.304 trtype: tcp 00:25:37.304 adrfam: ipv4 00:25:37.304 subtype: current discovery subsystem 00:25:37.304 treq: not specified, sq flow control disable supported 00:25:37.304 portid: 1 00:25:37.304 trsvcid: 4420 00:25:37.304 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:37.304 traddr: 10.0.0.1 00:25:37.304 eflags: none 00:25:37.304 sectype: none 00:25:37.304 =====Discovery Log Entry 1====== 00:25:37.304 trtype: tcp 00:25:37.304 adrfam: ipv4 00:25:37.304 subtype: nvme subsystem 00:25:37.304 treq: not specified, sq flow control disable supported 00:25:37.304 portid: 1 00:25:37.304 trsvcid: 4420 00:25:37.304 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:37.304 traddr: 10.0.0.1 00:25:37.304 eflags: none 00:25:37.304 sectype: none 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.304 nvme0n1 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.304 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
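
The digest/dhgroup loop that begins here repeats one RPC sequence per key slot. Expressed as plain rpc.py calls (treating rpc_cmd as a thin wrapper around scripts/rpc.py aimed at the app's RPC socket, which is an assumption about the harness; the subcommands, flags, paths and NQNs are exactly those visible in the trace), a single iteration looks roughly like:

  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.MeL      # host secret for slot 0
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vxX   # controller (bidirectional) secret
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py bdev_nvme_get_controllers                             # expect a controller named nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The matching secrets are handed to the kernel side beforehand by nvmet_auth_set_key (the echo 'hmac(sha256)' / echo ffdhe2048 / echo DHHC-1:... records above, presumably targeting the nvmet configfs host entry created earlier), so the attach succeeds only when both ends agree on digest, DH group and key; the [[ nvme0 == \n\v\m\e\0 ]] checks in the trace verify exactly that.
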
00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.563 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.564 nvme0n1 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.564 08:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.564 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.822 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.822 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.823 nvme0n1 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.823 08:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.082 nvme0n1 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.082 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.341 nvme0n1 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.341 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.600 nvme0n1 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.600 08:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.600 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.859 nvme0n1 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:38.859 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.860 
08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.860 08:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.119 nvme0n1 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.119 08:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.119 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.120 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.120 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.378 nvme0n1 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:39.378 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.379 08:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.379 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.637 nvme0n1 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.637 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.638 08:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.638 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.898 nvme0n1 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.898 08:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.158 nvme0n1 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:40.158 08:08:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.158 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.159 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.418 nvme0n1 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
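[Note: the records above and below repeat the same sequence for every digest / DH-group / key-ID combination under test. A condensed sketch of that per-iteration flow, reconstructed only from the commands visible in this trace (it is not the verbatim host/auth.sh source; rpc_cmd is the test suite's RPC helper, and the keys/ckeys arrays are assumed to have been populated with the DHHC-1 secrets earlier in the run):

  # Sketch of the loop the trace walks through (dhgroups/digests beyond those
  # visible in this excerpt are not listed here).
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in "${!keys[@]}"; do
      # Target side: install the DH-HMAC-CHAP secret (and controller secret, if any)
      # for this key ID; the redirection targets of the echoes are not shown in this excerpt.
      nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
      # Host side: restrict the initiator to the digest/DH group under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
      # Connect with the matching key pair and verify the controller shows up ...
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      # ... then tear the session down before the next combination.
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done

The bare "nvme0n1" lines interleaved in the records are the namespace name printed by the test once the authenticated controller's namespace appears, which is how each successful connect is confirmed before detaching. End of note; the trace continues below.]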
00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.418 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.678 nvme0n1 00:25:40.678 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.678 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.678 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.678 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.678 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.678 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.937 08:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.196 nvme0n1 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.196 08:08:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.196 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.455 nvme0n1 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.456 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.023 nvme0n1 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 
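The target-side half of each iteration is the nvmet_auth_set_key call whose echoes appear in the trace: the digest string 'hmac(sha256)', the FFDHE group, and the DHHC-1 host and controller secrets. Bash xtrace does not print redirections, so the destinations of those echoes are not visible here; the sketch below assumes a Linux kernel nvmet soft target whose per-host dhchap_* attributes live under configfs, using the keyid-1/ffdhe6144 values from the surrounding trace. The configfs path, attribute names, and host NQN directory are assumptions, not taken from this log.

  # Hypothetical target-side sketch of nvmet_auth_set_key for sha256/ffdhe6144/keyid 1.
  # Assumption: the target is the Linux kernel nvmet soft target; the attribute files below
  # (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are not shown by the xtrace above.
  hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  echo 'hmac(sha256)' > "$hostdir/dhchap_hash"   # digest used for DH-HMAC-CHAP
  echo ffdhe6144 > "$hostdir/dhchap_dhgroup"     # FFDHE group under test
  # Host secret (key1) and controller secret (ckey1) for bidirectional authentication.
  echo 'DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==:' > "$hostdir/dhchap_key"
  echo 'DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==:' > "$hostdir/dhchap_ctrl_key"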
00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.023 08:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.282 nvme0n1 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.282 08:08:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.282 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.850 nvme0n1 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.850 08:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.414 nvme0n1 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.414 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.672 nvme0n1 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:43.672 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.673 08:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:44.239 nvme0n1 00:25:44.239 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.239 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.239 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.239 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.239 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.239 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.497 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.498 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.064 nvme0n1 00:25:45.064 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.064 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.064 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.064 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.064 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.064 08:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:45.064 
08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:45.064 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.065 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.630 nvme0n1 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.630 
08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.630 08:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.196 nvme0n1 00:25:46.196 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.196 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.196 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.196 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.196 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.454 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.020 nvme0n1 00:25:47.020 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.020 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.021 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.021 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.021 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.021 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.021 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.021 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.021 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.021 08:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.021 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.280 nvme0n1 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.280 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.539 nvme0n1 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:47.539 08:08:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.539 nvme0n1 00:25:47.539 08:08:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.539 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:47.797 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.798 nvme0n1 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.798 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.056 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.057 08:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.057 nvme0n1 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.057 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.316 nvme0n1 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.316 
08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.316 08:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.316 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.575 nvme0n1 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.575 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.834 nvme0n1 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.834 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:48.835 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.835 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.835 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.835 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.835 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.835 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.835 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.835 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.835 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.835 08:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.093 nvme0n1 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:49.093 
08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.093 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.094 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.094 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.094 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.094 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.094 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:49.094 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.094 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.352 nvme0n1 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.352 
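The trace above repeats one pattern per digest/dhgroup/key combination. The sketch below is a hand-condensed reading of a single iteration, not the verbatim host/auth.sh source; it assumes the test framework's rpc_cmd wrapper, the nvmet_auth_set_key helper and the keys/ckeys arrays that the xtrace lines reference, and it reuses the address, port, NQNs and key names exactly as they appear in the log (10.0.0.1:4420, nqn.2024-02.io.spdk:host0/cnode0, keyN/ckeyN).

# Hand-condensed sketch of one loop iteration from the xtrace above
# (assumes the SPDK test framework environment: rpc_cmd, nvmet_auth_set_key, keys/ckeys).
digest=sha384
dhgroup=ffdhe3072
keyid=3

# Target side: program digest, DH group and DHHC-1 secret for this key index
# (the echo'd hmac(sha384) / ffdhe3072 / DHHC-1:... values in the trace).
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# Host side: restrict the initiator to the same digest and DH group ...
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# ... then connect with the matching host key; the controller key is only passed
# when a ckey exists for this index (keyid=4 has an empty ckey in the trace).
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

# Verify the controller authenticated, then tear it down for the next key index
# (the trace checks the returned name against nvme0 before detaching).
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
rpc_cmd bdev_nvme_detach_controller nvme0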
08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.352 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.611 nvme0n1 00:25:49.611 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.611 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.611 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.611 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.611 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.611 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.611 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.611 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.611 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.611 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.870 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.871 08:08:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.871 08:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.129 nvme0n1 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.129 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.130 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.389 nvme0n1 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.389 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.390 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.390 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.390 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.390 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.390 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.390 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.390 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.390 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.390 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.648 nvme0n1 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.648 08:08:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.648 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.649 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.907 nvme0n1 00:25:50.907 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.907 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.907 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.907 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.907 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.907 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.907 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.907 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.907 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.907 08:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.907 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.214 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.497 nvme0n1 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.497 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.784 nvme0n1 00:25:51.784 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.784 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.784 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.784 08:08:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.784 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.784 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.043 08:08:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.043 08:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.302 nvme0n1 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.302 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.303 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.303 
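Between key changes the trace keeps re-running get_main_ns_ip, which only resolves the address the host should dial for the active transport. A minimal reconstruction of that logic from the xtrace lines above follows; the transport variable name and the indirect ${!ip} expansion are assumptions (the trace only shows the expanded values, tcp and 10.0.0.1), and the guards in the real nvmf/common.sh may differ.

# Reconstructed from the get_main_ns_ip trace; TEST_TRANSPORT and ${!ip} are assumptions.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) dial the initiator IP

    # The transport expands to tcp here, so ip becomes the name NVMF_INITIATOR_IP ...
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}

    # ... and its value (10.0.0.1 in this log) is echoed for the caller to use
    # as the -a argument of bdev_nvme_attach_controller.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}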
08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.869 nvme0n1 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.869 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.870 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.870 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.870 08:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.436 nvme0n1 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.436 08:08:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.436 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.012 nvme0n1 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.012 08:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.579 nvme0n1 00:25:54.579 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.579 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.579 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.579 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.579 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.580 
08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.580 08:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.148 nvme0n1 00:25:55.148 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.148 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.148 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.148 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.148 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.148 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.148 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.148 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.148 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.148 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.406 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.972 nvme0n1 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.972 08:08:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.972 08:08:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.972 08:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.539 nvme0n1 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.539 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:56.798 nvme0n1 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.798 08:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.056 nvme0n1 00:25:57.056 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.056 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:57.057 
08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.057 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.315 nvme0n1 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:57.315 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.316 
08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.316 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.575 nvme0n1 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.575 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.834 nvme0n1 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.834 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.093 nvme0n1 00:25:58.093 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.093 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.093 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.093 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.093 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.093 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.093 
08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.093 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.093 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.093 08:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.093 08:08:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.093 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.353 nvme0n1 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:58.353 08:08:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.353 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 nvme0n1 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 08:08:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.871 nvme0n1 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.871 
08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.871 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.872 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.872 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.872 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.872 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.872 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.872 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.872 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:58.872 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.872 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
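The trace above is one complete pass of the sha512/ffdhe3072 combination: for each keyid 0-4 the test loads the DHHC-1 secret for that keyid on the target side (the nvmet_auth_set_key helper), restricts the SPDK initiator to the digest/dhgroup under test, attaches a controller over TCP to 10.0.0.1:4420, checks that bdev_nvme_get_controllers reports nvme0, and detaches again before moving on to ffdhe4096 below. A minimal sketch of one such iteration follows, reconstructed from the xtrace output rather than taken verbatim from test/nvmf/host/auth.sh; it assumes rpc_cmd is the autotest wrapper around scripts/rpc.py and that key4 was registered with the keyring earlier in the test:

  # Allow only the digest/dhgroup pair being exercised in this pass.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # Attach to the kernel nvmet target; keyid 4 has no controller key in the trace,
  # so only --dhchap-key is passed (other keyids also pass --dhchap-ctrlr-key ckeyN).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4

  # The attach exercises DH-HMAC-CHAP against the key configured on the target;
  # verify the controller came up, then tear it down before the next keyid.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0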
00:25:58.872 nvme0n1 00:25:58.872 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.130 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.130 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.130 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.130 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.130 08:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:59.130 08:08:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.130 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.131 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.131 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.131 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.131 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.131 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.389 nvme0n1 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.389 08:08:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:25:59.389 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.390 08:08:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.390 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.648 nvme0n1 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:59.648 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.649 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.907 nvme0n1 00:25:59.907 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.907 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.907 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.907 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.907 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.907 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.907 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.907 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:59.907 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.907 08:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.907 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.908 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:59.908 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.908 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.167 nvme0n1 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.167 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.425 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.426 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.701 nvme0n1 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:26:00.701 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.702 08:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.702 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.703 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.703 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.703 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.703 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.703 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.703 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.703 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.703 08:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.962 nvme0n1 00:26:00.962 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.962 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.962 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.962 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.962 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.220 08:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.220 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.478 nvme0n1 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:26:01.478 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:26:01.735 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:01.735 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.735 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.736 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.993 nvme0n1 00:26:01.993 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.993 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.993 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.993 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.993 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.993 08:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.993 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.581 nvme0n1 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.581 08:08:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.581 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.838 nvme0n1 00:26:02.838 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.838 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.838 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.838 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.838 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.838 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzVjZjIxOTVmNDcwOWYyMGNiNGY4ZTkwMmI3YjY0MzS9ew9p: 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: ]] 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjA0NDU3OWFlMmIyMWJmNjQ3M2MwNTgzZGQ4MjJmZjYwMzQ1OTQ5MGRmMjRiMzk4OGFkODc0MjRiZjlkYTg5NBBqmRA=: 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.096 08:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.661 nvme0n1 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.661 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.662 08:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.227 nvme0n1 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.227 08:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.227 08:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.227 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.794 nvme0n1 00:26:04.794 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.794 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.794 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.794 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.794 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.794 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhOGI3ZTY2MDA1MGUyOTM5M2NmMjEzYTg2OWY3ZDgxNzA0MTQwMzljZTYwNmNiEcbISA==: 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: ]] 00:26:05.052 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE0ZmI3M2YzOGUzN2IwMGM0MmFjZWVhYjg1NDk1MGbipAp/: 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.053 08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.053 
08:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.617 nvme0n1 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MzhjMDY4MzZkYmRmNmJiYWViMWU3MDU1MDZiNTI1ZjY4NzRiMWRjOTkwZTc3NjRmMWFkNzhmZGY1NWMzZqlOIck=: 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.617 08:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.183 nvme0n1 00:26:06.183 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.183 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.183 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.183 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.183 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.183 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.183 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.183 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.183 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.184 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.443 request: 00:26:06.443 { 00:26:06.443 "name": "nvme0", 00:26:06.443 "trtype": "tcp", 00:26:06.443 "traddr": "10.0.0.1", 00:26:06.443 "adrfam": "ipv4", 00:26:06.443 "trsvcid": "4420", 00:26:06.443 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:06.443 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:06.443 "prchk_reftag": false, 00:26:06.443 "prchk_guard": false, 00:26:06.443 "hdgst": false, 00:26:06.443 "ddgst": false, 00:26:06.443 "allow_unrecognized_csi": false, 00:26:06.443 "method": "bdev_nvme_attach_controller", 00:26:06.443 "req_id": 1 00:26:06.443 } 00:26:06.443 Got JSON-RPC error response 00:26:06.443 response: 00:26:06.443 { 00:26:06.443 "code": -5, 00:26:06.443 "message": "Input/output error" 00:26:06.443 } 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.443 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.443 request: 00:26:06.443 { 00:26:06.443 "name": "nvme0", 00:26:06.443 "trtype": "tcp", 00:26:06.443 "traddr": "10.0.0.1", 00:26:06.443 "adrfam": "ipv4", 00:26:06.443 "trsvcid": "4420", 00:26:06.443 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:06.443 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:06.443 "prchk_reftag": false, 00:26:06.443 "prchk_guard": false, 00:26:06.443 "hdgst": false, 00:26:06.443 "ddgst": false, 00:26:06.444 "dhchap_key": "key2", 00:26:06.444 "allow_unrecognized_csi": false, 00:26:06.444 "method": "bdev_nvme_attach_controller", 00:26:06.444 "req_id": 1 00:26:06.444 } 00:26:06.444 Got JSON-RPC error response 00:26:06.444 response: 00:26:06.444 { 00:26:06.444 "code": -5, 00:26:06.444 "message": "Input/output error" 00:26:06.444 } 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.444 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.703 request: 00:26:06.703 { 00:26:06.703 "name": "nvme0", 00:26:06.703 "trtype": "tcp", 00:26:06.703 "traddr": "10.0.0.1", 00:26:06.703 "adrfam": "ipv4", 00:26:06.703 "trsvcid": "4420", 00:26:06.703 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:06.703 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:06.703 "prchk_reftag": false, 00:26:06.703 "prchk_guard": false, 00:26:06.703 "hdgst": false, 00:26:06.703 "ddgst": false, 00:26:06.703 "dhchap_key": "key1", 00:26:06.703 "dhchap_ctrlr_key": "ckey2", 00:26:06.703 "allow_unrecognized_csi": false, 00:26:06.703 "method": "bdev_nvme_attach_controller", 00:26:06.703 "req_id": 1 00:26:06.703 } 00:26:06.703 Got JSON-RPC error response 00:26:06.703 response: 00:26:06.703 { 00:26:06.703 "code": -5, 00:26:06.703 "message": "Input/output 
error" 00:26:06.703 } 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.703 nvme0n1 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.703 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.962 request: 00:26:06.962 { 00:26:06.962 "name": "nvme0", 00:26:06.962 "dhchap_key": "key1", 00:26:06.962 "dhchap_ctrlr_key": "ckey2", 00:26:06.962 "method": "bdev_nvme_set_keys", 00:26:06.962 "req_id": 1 00:26:06.962 } 00:26:06.962 Got JSON-RPC error response 00:26:06.962 response: 00:26:06.962 { 00:26:06.962 "code": -13, 00:26:06.962 "message": "Permission denied" 00:26:06.962 } 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:06.962 08:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:07.896 08:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.896 08:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:07.896 08:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.896 08:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.896 08:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.896 08:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:07.896 08:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:09.271 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.271 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:09.271 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTliNjI1ZTk5NDUwY2RiMTBkNWY5N2M3MTA1M2U2YTg4MzcxMDY0YWZkMTVjYzY535RQFw==: 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: ]] 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YTYyNTkxZGQzZTIwYTc1MzczOGZkYjdjMTk0MDNjOTgwNTNlZjhiZGZkNTJjYzFltHnJQA==: 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.272 nvme0n1 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUwOWRkZjdlYThmMzhhYzQ5Nzc5ZGZkNjQ3ZTI0NTKWg0Pd: 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: ]] 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc2NTFiOGIwNzgzZGI0NTJmMDc5YTVkZWYyOTVmYjXs2pZw: 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.272 request: 00:26:09.272 { 00:26:09.272 "name": "nvme0", 00:26:09.272 "dhchap_key": "key2", 00:26:09.272 "dhchap_ctrlr_key": "ckey1", 00:26:09.272 "method": "bdev_nvme_set_keys", 00:26:09.272 "req_id": 1 00:26:09.272 } 00:26:09.272 Got JSON-RPC error response 00:26:09.272 response: 00:26:09.272 { 00:26:09.272 "code": -13, 00:26:09.272 "message": "Permission denied" 00:26:09.272 } 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:09.272 08:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:10.647 08:09:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.647 rmmod nvme_tcp 00:26:10.647 rmmod nvme_fabrics 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2578868 ']' 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2578868 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2578868 ']' 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2578868 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2578868 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2578868' 00:26:10.647 killing process with pid 2578868 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2578868 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2578868 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:10.647 08:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:13.181 08:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:15.715 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:15.715 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:16.282 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:16.541 08:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.MeL /tmp/spdk.key-null.wxP /tmp/spdk.key-sha256.B8j /tmp/spdk.key-sha384.3tM /tmp/spdk.key-sha512.vss /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:16.541 08:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:19.074 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:19.074 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:26:19.074 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:19.074 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:19.333 00:26:19.333 real 0m53.393s 00:26:19.333 user 0m48.468s 00:26:19.333 sys 0m12.091s 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.333 ************************************ 00:26:19.333 END TEST nvmf_auth_host 00:26:19.333 ************************************ 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.333 ************************************ 00:26:19.333 START TEST nvmf_digest 00:26:19.333 ************************************ 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:19.333 * Looking for test storage... 
00:26:19.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:19.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.333 --rc genhtml_branch_coverage=1 00:26:19.333 --rc genhtml_function_coverage=1 00:26:19.333 --rc genhtml_legend=1 00:26:19.333 --rc geninfo_all_blocks=1 00:26:19.333 --rc geninfo_unexecuted_blocks=1 00:26:19.333 00:26:19.333 ' 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:19.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.333 --rc genhtml_branch_coverage=1 00:26:19.333 --rc genhtml_function_coverage=1 00:26:19.333 --rc genhtml_legend=1 00:26:19.333 --rc geninfo_all_blocks=1 00:26:19.333 --rc geninfo_unexecuted_blocks=1 00:26:19.333 00:26:19.333 ' 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:19.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.333 --rc genhtml_branch_coverage=1 00:26:19.333 --rc genhtml_function_coverage=1 00:26:19.333 --rc genhtml_legend=1 00:26:19.333 --rc geninfo_all_blocks=1 00:26:19.333 --rc geninfo_unexecuted_blocks=1 00:26:19.333 00:26:19.333 ' 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:19.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.333 --rc genhtml_branch_coverage=1 00:26:19.333 --rc genhtml_function_coverage=1 00:26:19.333 --rc genhtml_legend=1 00:26:19.333 --rc geninfo_all_blocks=1 00:26:19.333 --rc geninfo_unexecuted_blocks=1 00:26:19.333 00:26:19.333 ' 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.333 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.333 
08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.334 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.334 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.334 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.334 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.334 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.334 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.334 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.592 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:19.592 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:19.592 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.592 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.592 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:19.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:19.593 08:09:13 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:19.593 08:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.862 
08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:24.862 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:24.862 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:24.862 Found net devices under 0000:86:00.0: cvl_0_0 
00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:24.862 Found net devices under 0000:86:00.1: cvl_0_1 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.862 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.122 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.122 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:25.122 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:25.122 08:09:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.122 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.122 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.122 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:25.122 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:25.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:26:25.122 00:26:25.122 --- 10.0.0.2 ping statistics --- 00:26:25.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.122 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:26:25.122 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:25.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:26:25.122 00:26:25.122 --- 10.0.0.1 ping statistics --- 00:26:25.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.122 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:25.123 ************************************ 00:26:25.123 START TEST nvmf_digest_clean 00:26:25.123 ************************************ 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2593179 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2593179 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2593179 ']' 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.123 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.123 [2024-11-27 08:09:19.212757] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:26:25.123 [2024-11-27 08:09:19.212804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.382 [2024-11-27 08:09:19.280075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.382 [2024-11-27 08:09:19.320725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.382 [2024-11-27 08:09:19.320761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.382 [2024-11-27 08:09:19.320770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.382 [2024-11-27 08:09:19.320777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.382 [2024-11-27 08:09:19.320782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:25.382 [2024-11-27 08:09:19.321365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.382 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.382 null0 00:26:25.641 [2024-11-27 08:09:19.491867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.641 [2024-11-27 08:09:19.516084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2593276 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2593276 /var/tmp/bperf.sock 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2593276 ']' 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:25.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.641 [2024-11-27 08:09:19.567381] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:26:25.641 [2024-11-27 08:09:19.567424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593276 ] 00:26:25.641 [2024-11-27 08:09:19.629890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.641 [2024-11-27 08:09:19.673256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:25.641 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:25.900 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.900 08:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.467 nvme0n1 00:26:26.467 08:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:26.467 08:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:26.467 Running I/O for 2 seconds... 
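Each run_bperf pass in this test follows the same driver pattern, all of it over bdevperf's own RPC socket: bdevperf is started with -z and --wait-for-rpc, framework_start_init brings it up, a single NVMe/TCP controller is attached with --ddgst so the data digest path is actually exercised, and bdevperf.py perform_tests runs the two-second workload against the resulting nvme0n1 bdev. The sketch below condenses the calls traced above for this first pass (randread, 4 KiB, queue depth 128); the paths are the jenkins workspace paths from this log, and the plain kill at the end stands in for the suite's killprocess helper.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!

  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # --ddgst: enable NVMe/TCP data digest
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  kill $bperfpid && wait $bperfpid                  # the suite uses killprocess here instead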
00:26:28.778 23659.00 IOPS, 92.42 MiB/s [2024-11-27T07:09:22.887Z] 24463.00 IOPS, 95.56 MiB/s 00:26:28.778 Latency(us) 00:26:28.778 [2024-11-27T07:09:22.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.778 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:28.778 nvme0n1 : 2.01 24467.49 95.58 0.00 0.00 5225.01 2478.97 16298.52 00:26:28.778 [2024-11-27T07:09:22.887Z] =================================================================================================================== 00:26:28.778 [2024-11-27T07:09:22.887Z] Total : 24467.49 95.58 0.00 0.00 5225.01 2478.97 16298.52 00:26:28.778 { 00:26:28.778 "results": [ 00:26:28.778 { 00:26:28.778 "job": "nvme0n1", 00:26:28.778 "core_mask": "0x2", 00:26:28.778 "workload": "randread", 00:26:28.778 "status": "finished", 00:26:28.778 "queue_depth": 128, 00:26:28.778 "io_size": 4096, 00:26:28.778 "runtime": 2.005559, 00:26:28.778 "iops": 24467.49260430633, 00:26:28.778 "mibps": 95.57614298557161, 00:26:28.778 "io_failed": 0, 00:26:28.778 "io_timeout": 0, 00:26:28.778 "avg_latency_us": 5225.009971390168, 00:26:28.778 "min_latency_us": 2478.9704347826087, 00:26:28.778 "max_latency_us": 16298.518260869565 00:26:28.778 } 00:26:28.778 ], 00:26:28.778 "core_count": 1 00:26:28.778 } 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:28.778 | select(.opcode=="crc32c") 00:26:28.778 | "\(.module_name) \(.executed)"' 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2593276 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2593276 ']' 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2593276 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593276 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593276' 00:26:28.778 killing process with pid 2593276 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2593276 00:26:28.778 Received shutdown signal, test time was about 2.000000 seconds 00:26:28.778 00:26:28.778 Latency(us) 00:26:28.778 [2024-11-27T07:09:22.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.778 [2024-11-27T07:09:22.887Z] =================================================================================================================== 00:26:28.778 [2024-11-27T07:09:22.887Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:28.778 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2593276 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2593886 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2593886 /var/tmp/bperf.sock 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2593886 ']' 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:29.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.037 08:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:29.037 [2024-11-27 08:09:22.989688] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:26:29.037 [2024-11-27 08:09:22.989738] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593886 ] 00:26:29.037 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:29.037 Zero copy mechanism will not be used. 00:26:29.037 [2024-11-27 08:09:23.053625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.037 [2024-11-27 08:09:23.094873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.037 08:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.037 08:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:29.037 08:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:29.037 08:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:29.037 08:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:29.296 08:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.296 08:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.863 nvme0n1 00:26:29.863 08:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:29.863 08:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.863 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:29.863 Zero copy mechanism will not be used. 00:26:29.863 Running I/O for 2 seconds... 
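The MiB/s column in the results above is derived directly from the measured IOPS and the configured I/O size, so it is easy to sanity-check: for the 4 KiB pass, 24467.49 IOPS times 4096 bytes comes to about 95.58 MiB/s, matching the reported "mibps" value, and for the 128 KiB passes the same check reduces to dividing IOPS by 8. A throwaway one-liner for that arithmetic, with the constant copied from the JSON above:

  awk 'BEGIN { printf "%.2f MiB/s\n", 24467.49 * 4096 / 2^20 }'   # -> 95.58 MiB/s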
00:26:32.173 5590.00 IOPS, 698.75 MiB/s [2024-11-27T07:09:26.282Z] 5391.00 IOPS, 673.88 MiB/s 00:26:32.173 Latency(us) 00:26:32.173 [2024-11-27T07:09:26.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.173 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:32.173 nvme0n1 : 2.00 5392.71 674.09 0.00 0.00 2964.12 658.92 12252.38 00:26:32.173 [2024-11-27T07:09:26.282Z] =================================================================================================================== 00:26:32.173 [2024-11-27T07:09:26.282Z] Total : 5392.71 674.09 0.00 0.00 2964.12 658.92 12252.38 00:26:32.173 { 00:26:32.173 "results": [ 00:26:32.173 { 00:26:32.173 "job": "nvme0n1", 00:26:32.173 "core_mask": "0x2", 00:26:32.173 "workload": "randread", 00:26:32.173 "status": "finished", 00:26:32.173 "queue_depth": 16, 00:26:32.173 "io_size": 131072, 00:26:32.173 "runtime": 2.002332, 00:26:32.173 "iops": 5392.712097694089, 00:26:32.173 "mibps": 674.0890122117611, 00:26:32.173 "io_failed": 0, 00:26:32.173 "io_timeout": 0, 00:26:32.173 "avg_latency_us": 2964.1249558291797, 00:26:32.173 "min_latency_us": 658.9217391304347, 00:26:32.173 "max_latency_us": 12252.382608695652 00:26:32.173 } 00:26:32.173 ], 00:26:32.173 "core_count": 1 00:26:32.173 } 00:26:32.173 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:32.173 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:32.173 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:32.173 | select(.opcode=="crc32c") 00:26:32.173 | "\(.module_name) \(.executed)"' 00:26:32.173 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:32.173 08:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:32.173 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:32.173 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:32.173 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:32.173 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:32.173 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2593886 00:26:32.173 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2593886 ']' 00:26:32.173 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2593886 00:26:32.173 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:32.173 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.174 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593886 00:26:32.174 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:32.174 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:32.174 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593886' 00:26:32.174 killing process with pid 2593886 00:26:32.174 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2593886 00:26:32.174 Received shutdown signal, test time was about 2.000000 seconds 00:26:32.174 00:26:32.174 Latency(us) 00:26:32.174 [2024-11-27T07:09:26.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.174 [2024-11-27T07:09:26.283Z] =================================================================================================================== 00:26:32.174 [2024-11-27T07:09:26.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:32.174 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2593886 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2594369 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2594369 /var/tmp/bperf.sock 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2594369 ']' 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:32.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:32.432 [2024-11-27 08:09:26.366603] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:26:32.432 [2024-11-27 08:09:26.366654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594369 ] 00:26:32.432 [2024-11-27 08:09:26.428629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.432 [2024-11-27 08:09:26.471500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:32.432 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:32.690 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.690 08:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.948 nvme0n1 00:26:33.206 08:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:33.206 08:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:33.206 Running I/O for 2 seconds... 
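The accel_get_stats/jq step that follows every pass above is how the test decides whether the digests were computed by the module it expected: the stats are pulled over the bperf socket, the crc32c entry is extracted, and the test requires a non-zero executed count plus a module name of software, since these runs were started with scan_dsa=false. A condensed version of that check, using the same jq filter as the trace (the surrounding shell here is mine, not the suite's):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  read -r acc_module acc_executed < <($RPC -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  if [ "$acc_executed" -gt 0 ] && [ "$acc_module" = software ]; then
      echo "crc32c digests were executed by the expected software module"
  fi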
00:26:35.075 27623.00 IOPS, 107.90 MiB/s [2024-11-27T07:09:29.184Z] 27766.50 IOPS, 108.46 MiB/s 00:26:35.075 Latency(us) 00:26:35.075 [2024-11-27T07:09:29.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.075 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:35.075 nvme0n1 : 2.01 27786.30 108.54 0.00 0.00 4600.02 2421.98 11853.47 00:26:35.075 [2024-11-27T07:09:29.184Z] =================================================================================================================== 00:26:35.075 [2024-11-27T07:09:29.184Z] Total : 27786.30 108.54 0.00 0.00 4600.02 2421.98 11853.47 00:26:35.075 { 00:26:35.075 "results": [ 00:26:35.075 { 00:26:35.075 "job": "nvme0n1", 00:26:35.075 "core_mask": "0x2", 00:26:35.076 "workload": "randwrite", 00:26:35.076 "status": "finished", 00:26:35.076 "queue_depth": 128, 00:26:35.076 "io_size": 4096, 00:26:35.076 "runtime": 2.005449, 00:26:35.076 "iops": 27786.296235905276, 00:26:35.076 "mibps": 108.54021967150499, 00:26:35.076 "io_failed": 0, 00:26:35.076 "io_timeout": 0, 00:26:35.076 "avg_latency_us": 4600.0222642651825, 00:26:35.076 "min_latency_us": 2421.9826086956523, 00:26:35.076 "max_latency_us": 11853.467826086957 00:26:35.076 } 00:26:35.076 ], 00:26:35.076 "core_count": 1 00:26:35.076 } 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:35.333 | select(.opcode=="crc32c") 00:26:35.333 | "\(.module_name) \(.executed)"' 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2594369 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2594369 ']' 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2594369 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2594369 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:35.333 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 
-- # '[' reactor_1 = sudo ']' 00:26:35.334 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2594369' 00:26:35.334 killing process with pid 2594369 00:26:35.334 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2594369 00:26:35.334 Received shutdown signal, test time was about 2.000000 seconds 00:26:35.334 00:26:35.334 Latency(us) 00:26:35.334 [2024-11-27T07:09:29.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.334 [2024-11-27T07:09:29.443Z] =================================================================================================================== 00:26:35.334 [2024-11-27T07:09:29.443Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.334 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2594369 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2594940 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2594940 /var/tmp/bperf.sock 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2594940 ']' 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.592 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:35.592 [2024-11-27 08:09:29.631233] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:26:35.592 [2024-11-27 08:09:29.631284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594940 ] 00:26:35.593 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.593 Zero copy mechanism will not be used. 00:26:35.593 [2024-11-27 08:09:29.694452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.851 [2024-11-27 08:09:29.735573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.851 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.851 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:35.851 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:35.851 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:35.851 08:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:36.111 08:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.111 08:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.429 nvme0n1 00:26:36.429 08:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:36.429 08:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.429 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:36.429 Zero copy mechanism will not be used. 00:26:36.429 Running I/O for 2 seconds... 
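The burst of trace after each pass (kill -0, ps --no-headers -o comm=, kill, wait) is the suite's killprocess teardown, and the zero-filled latency table printed under "Received shutdown signal" is simply bdevperf's shutdown report for the pass being torn down. Stripped of the xtrace noise, the pattern looks roughly like this; the pid is an example taken from this log, and the real helper's special-casing of sudo wrappers is reduced to a comment:

  pid=2594369                               # bperfpid of the randwrite/4096 pass above
  kill -0 "$pid"                            # fails if the process already exited
  name=$(ps --no-headers -o comm= "$pid")   # the helper uses this to special-case sudo wrappers
  [ "$name" != sudo ] && kill "$pid"
  wait "$pid"                               # reap the child so the next pass can start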
00:26:38.777 5823.00 IOPS, 727.88 MiB/s [2024-11-27T07:09:32.886Z] 6041.00 IOPS, 755.12 MiB/s 00:26:38.777 Latency(us) 00:26:38.777 [2024-11-27T07:09:32.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.777 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:38.777 nvme0n1 : 2.00 6039.53 754.94 0.00 0.00 2644.75 1282.23 4587.52 00:26:38.777 [2024-11-27T07:09:32.886Z] =================================================================================================================== 00:26:38.777 [2024-11-27T07:09:32.886Z] Total : 6039.53 754.94 0.00 0.00 2644.75 1282.23 4587.52 00:26:38.777 { 00:26:38.777 "results": [ 00:26:38.777 { 00:26:38.777 "job": "nvme0n1", 00:26:38.777 "core_mask": "0x2", 00:26:38.777 "workload": "randwrite", 00:26:38.777 "status": "finished", 00:26:38.777 "queue_depth": 16, 00:26:38.777 "io_size": 131072, 00:26:38.777 "runtime": 2.003799, 00:26:38.777 "iops": 6039.527916722186, 00:26:38.777 "mibps": 754.9409895902733, 00:26:38.777 "io_failed": 0, 00:26:38.777 "io_timeout": 0, 00:26:38.777 "avg_latency_us": 2644.7508203459006, 00:26:38.777 "min_latency_us": 1282.2260869565218, 00:26:38.777 "max_latency_us": 4587.52 00:26:38.777 } 00:26:38.777 ], 00:26:38.777 "core_count": 1 00:26:38.777 } 00:26:38.777 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:38.777 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:38.777 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:38.777 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:38.777 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:38.777 | select(.opcode=="crc32c") 00:26:38.777 | "\(.module_name) \(.executed)"' 00:26:38.777 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:38.777 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:38.777 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2594940 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2594940 ']' 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2594940 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2594940 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2594940' 00:26:38.778 killing process with pid 2594940 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2594940 00:26:38.778 Received shutdown signal, test time was about 2.000000 seconds 00:26:38.778 00:26:38.778 Latency(us) 00:26:38.778 [2024-11-27T07:09:32.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.778 [2024-11-27T07:09:32.887Z] =================================================================================================================== 00:26:38.778 [2024-11-27T07:09:32.887Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.778 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2594940 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2593179 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2593179 ']' 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2593179 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593179 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593179' 00:26:39.036 killing process with pid 2593179 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2593179 00:26:39.036 08:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2593179 00:26:39.036 00:26:39.036 real 0m13.965s 00:26:39.036 user 0m26.819s 00:26:39.036 sys 0m4.391s 00:26:39.036 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.036 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:39.036 ************************************ 00:26:39.036 END TEST nvmf_digest_clean 00:26:39.036 ************************************ 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:39.295 ************************************ 00:26:39.295 START TEST nvmf_digest_error 00:26:39.295 ************************************ 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2595555 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2595555 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2595555 ']' 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.295 [2024-11-27 08:09:33.238803] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:26:39.295 [2024-11-27 08:09:33.238864] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.295 [2024-11-27 08:09:33.305798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.295 [2024-11-27 08:09:33.346963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.295 [2024-11-27 08:09:33.346998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.295 [2024-11-27 08:09:33.347005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.295 [2024-11-27 08:09:33.347011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.295 [2024-11-27 08:09:33.347016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:39.295 [2024-11-27 08:09:33.347547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:39.295 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.554 [2024-11-27 08:09:33.428027] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.554 null0 00:26:39.554 [2024-11-27 08:09:33.525063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.554 [2024-11-27 08:09:33.549260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2595582 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2595582 /var/tmp/bperf.sock 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2595582 ']' 
00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:39.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.554 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.554 [2024-11-27 08:09:33.587831] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:26:39.554 [2024-11-27 08:09:33.587873] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595582 ] 00:26:39.554 [2024-11-27 08:09:33.644036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.813 [2024-11-27 08:09:33.686132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.813 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.813 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:39.813 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:39.813 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:40.072 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:40.072 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.072 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:40.072 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.072 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.072 08:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.332 nvme0n1 00:26:40.332 08:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:40.332 08:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.332 08:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
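What distinguishes nvmf_digest_error from the clean variant is set up in the rpc_cmd/bperf_rpc calls just traced: on the target, which was started with --wait-for-rpc so this can happen before its framework initializes, crc32c work is assigned to the accel error module and that module is then told to corrupt results (accel_error_inject_error -o crc32c -t corrupt -i 256), while on the bdevperf side bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 makes the initiator record NVMe error statistics and retry failed I/O indefinitely. That combination is why the randread trace that follows is full of "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR completions without the test itself failing. Condensed into plain RPC calls, with the sockets as used in this log (rpc.py defaults to /var/tmp/spdk.sock for the target):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target side: route crc32c to the error module, then have it corrupt results
  $RPC accel_assign_opc -o crc32c -m error
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  # initiator side: keep NVMe error stats and retry forever instead of failing the bdev I/O
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1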
00:26:40.332 08:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.332 08:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:40.332 08:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:40.332 Running I/O for 2 seconds...
00:26:40.332 [2024-11-27 08:09:34.381958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0)
00:26:40.332 [2024-11-27 08:09:34.381992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.332 [2024-11-27 08:09:34.382002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence -- nvme_tcp.c:1365 data digest error on tqpair=(0x86c6b0), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for each affected READ from 08:09:34.391 through 08:09:35.358; only cid and lba differ ...]
00:26:41.373 23128.00 IOPS, 90.34 MiB/s [2024-11-27T07:09:35.482Z]
[... the same data digest error / transient transport error sequence continues for READs from 08:09:35.370 through 08:09:35.915 ...]
00:26:41.894 [2024-11-27 08:09:35.928878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0)
00:26:41.894 [2024-11-27 08:09:35.928900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.894 [2024-11-27 08:09:35.928909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.894 [2024-11-27 08:09:35.941180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:41.894 [2024-11-27 08:09:35.941202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.895 [2024-11-27 08:09:35.941211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.895 [2024-11-27 08:09:35.952355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:41.895 [2024-11-27 08:09:35.952376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.895 [2024-11-27 08:09:35.952384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.895 [2024-11-27 08:09:35.961262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:41.895 [2024-11-27 08:09:35.961284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.895 [2024-11-27 08:09:35.961291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.895 [2024-11-27 08:09:35.974851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:41.895 [2024-11-27 08:09:35.974873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.895 [2024-11-27 08:09:35.974882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.895 [2024-11-27 08:09:35.986522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:41.895 [2024-11-27 08:09:35.986545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.895 [2024-11-27 08:09:35.986554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.895 [2024-11-27 08:09:35.996027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:41.895 [2024-11-27 08:09:35.996049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.895 [2024-11-27 08:09:35.996059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.008756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.154 [2024-11-27 08:09:36.008780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.154 [2024-11-27 08:09:36.008790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.021393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.154 [2024-11-27 08:09:36.021420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.154 [2024-11-27 08:09:36.021434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.033162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.154 [2024-11-27 08:09:36.033182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.154 [2024-11-27 08:09:36.033192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.041686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.154 [2024-11-27 08:09:36.041707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.154 [2024-11-27 08:09:36.041717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.054674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.154 [2024-11-27 08:09:36.054695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.154 [2024-11-27 08:09:36.054705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.063335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.154 [2024-11-27 08:09:36.063357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.154 [2024-11-27 08:09:36.063367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.075723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.154 [2024-11-27 08:09:36.075745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.154 [2024-11-27 08:09:36.075755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.088290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.154 [2024-11-27 08:09:36.088311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.154 [2024-11-27 08:09:36.088322] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.101167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.154 [2024-11-27 08:09:36.101188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.154 [2024-11-27 08:09:36.101198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.114216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.154 [2024-11-27 08:09:36.114237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.154 [2024-11-27 08:09:36.114247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.154 [2024-11-27 08:09:36.127289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.127313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.127323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.140178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.140205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.140214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.151883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.151903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.151913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.160573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.160593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.160603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.173612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.173634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
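
The repeated pairs in this stream -- an nvme_tcp.c "data digest error" immediately followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- are the expected effect of the injected crc32c corruption (the same accel_error_inject_error mechanism the next pass sets up further down), and they are what the digest-error case counts once the 2-second run ends: get_transient_errcount, visible in the trace below, reads the tally out of bdev_get_iostat and checks that it is greater than zero. A minimal stand-alone sketch of that query, reusing the socket path, bdev name and jq filter recorded later in this log (the resulting count is run-specific):

  # sketch only -- assumes bdevperf is still serving RPCs on /var/tmp/bperf.sock
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
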
00:26:42.155 [2024-11-27 08:09:36.173646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.186549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.186570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.186580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.198039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.198060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.198069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.206609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.206629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.206639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.223521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.223542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.223552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.232652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.232673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.232683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.244443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.244465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.244475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.155 [2024-11-27 08:09:36.255969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.155 [2024-11-27 08:09:36.255990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5743 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.155 [2024-11-27 08:09:36.255999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.414 [2024-11-27 08:09:36.264801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.414 [2024-11-27 08:09:36.264822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.414 [2024-11-27 08:09:36.264832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.414 [2024-11-27 08:09:36.278049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.414 [2024-11-27 08:09:36.278070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.414 [2024-11-27 08:09:36.278080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.414 [2024-11-27 08:09:36.290771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.414 [2024-11-27 08:09:36.290791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.414 [2024-11-27 08:09:36.290800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.414 [2024-11-27 08:09:36.302417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.414 [2024-11-27 08:09:36.302440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.414 [2024-11-27 08:09:36.302452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.414 [2024-11-27 08:09:36.315550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.414 [2024-11-27 08:09:36.315570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.414 [2024-11-27 08:09:36.315580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.414 [2024-11-27 08:09:36.324274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.414 [2024-11-27 08:09:36.324295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.414 [2024-11-27 08:09:36.324307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.414 [2024-11-27 08:09:36.336625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.414 [2024-11-27 08:09:36.336651] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.415 [2024-11-27 08:09:36.336664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.415 [2024-11-27 08:09:36.347220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.415 [2024-11-27 08:09:36.347244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.415 [2024-11-27 08:09:36.347253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.415 [2024-11-27 08:09:36.355776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.415 [2024-11-27 08:09:36.355799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.415 [2024-11-27 08:09:36.355807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.415 [2024-11-27 08:09:36.367717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86c6b0) 00:26:42.415 [2024-11-27 08:09:36.367740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.415 [2024-11-27 08:09:36.367749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.415 23210.50 IOPS, 90.67 MiB/s 00:26:42.415 Latency(us) 00:26:42.415 [2024-11-27T07:09:36.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.415 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:42.415 nvme0n1 : 2.00 23226.61 90.73 0.00 0.00 5505.64 2464.72 19261.89 00:26:42.415 [2024-11-27T07:09:36.524Z] =================================================================================================================== 00:26:42.415 [2024-11-27T07:09:36.524Z] Total : 23226.61 90.73 0.00 0.00 5505.64 2464.72 19261.89 00:26:42.415 { 00:26:42.415 "results": [ 00:26:42.415 { 00:26:42.415 "job": "nvme0n1", 00:26:42.415 "core_mask": "0x2", 00:26:42.415 "workload": "randread", 00:26:42.415 "status": "finished", 00:26:42.415 "queue_depth": 128, 00:26:42.415 "io_size": 4096, 00:26:42.415 "runtime": 2.004856, 00:26:42.415 "iops": 23226.605801114893, 00:26:42.415 "mibps": 90.72892891060505, 00:26:42.415 "io_failed": 0, 00:26:42.415 "io_timeout": 0, 00:26:42.415 "avg_latency_us": 5505.635475482204, 00:26:42.415 "min_latency_us": 2464.7234782608693, 00:26:42.415 "max_latency_us": 19261.885217391304 00:26:42.415 } 00:26:42.415 ], 00:26:42.415 "core_count": 1 00:26:42.415 } 00:26:42.415 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:42.415 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:42.415 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:42.415 | .driver_specific 00:26:42.415 | .nvme_error 00:26:42.415 | .status_code 00:26:42.415 | 
.command_transient_transport_error' 00:26:42.415 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 182 > 0 )) 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2595582 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2595582 ']' 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2595582 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2595582 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2595582' 00:26:42.674 killing process with pid 2595582 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2595582 00:26:42.674 Received shutdown signal, test time was about 2.000000 seconds 00:26:42.674 00:26:42.674 Latency(us) 00:26:42.674 [2024-11-27T07:09:36.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.674 [2024-11-27T07:09:36.783Z] =================================================================================================================== 00:26:42.674 [2024-11-27T07:09:36.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2595582 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2596172 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2596172 /var/tmp/bperf.sock 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2596172 ']' 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:42.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:42.674 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.934 08:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.934 [2024-11-27 08:09:36.824611] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:26:42.934 [2024-11-27 08:09:36.824661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596172 ] 00:26:42.934 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:42.934 Zero copy mechanism will not be used. 00:26:42.934 [2024-11-27 08:09:36.886664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.934 [2024-11-27 08:09:36.929197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.934 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.934 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:42.934 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.934 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:43.193 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:43.193 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.193 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:43.193 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.193 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.193 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.450 nvme0n1 00:26:43.450 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:43.451 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.451 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:26:43.451 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.451 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:43.451 08:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:43.709 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:43.709 Zero copy mechanism will not be used. 00:26:43.709 Running I/O for 2 seconds... 00:26:43.709 [2024-11-27 08:09:37.607857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.607894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.607905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.615843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.615870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.615880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.623296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.623321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.623330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.631709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.631733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.631742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.640037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.640062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.640071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.648966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.649002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.649011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.657240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.657264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.657273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.665738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.665762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.665771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.673685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.673709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.673718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.682646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.682670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.682679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.691249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.691273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.691282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.709 [2024-11-27 08:09:37.699705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.709 [2024-11-27 08:09:37.699728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.709 [2024-11-27 08:09:37.699740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.708240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.708264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.708272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.717018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.717042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.717051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.725450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.725474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.725482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.733289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.733314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.733322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.741137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.741161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.741170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.749035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.749059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.749068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.757371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.757394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.757402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.765162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.765186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.765195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.773960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.773988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.773997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.781601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.781625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.781634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.785793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.785814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.785823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.792822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.792845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.792853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.800275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.800298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.800308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.806768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 [2024-11-27 08:09:37.806791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.806800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.710 [2024-11-27 08:09:37.814772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.710 
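
For orientation, this second stream of digest errors (len:32, i.e. the 131072-byte / qd 16 pass) is produced by the RPC sequence the xtrace output above records: NVMe error statistics are enabled on the bdev layer, crc32c error injection is switched off while the controller attaches with data digest (--ddgst) enabled, corruption is then injected, and perform_tests drives the I/O. A condensed sketch of that sequence, with sockets and arguments copied from the trace (not a verbatim excerpt of host/digest.sh):

  # condensed from the xtrace earlier in this log; paths relative to the SPDK tree
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable            # rpc_cmd in the trace (no -s flag, default RPC socket)
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32      # '-i 32' as recorded in the trace
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
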
[2024-11-27 08:09:37.814812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.710 [2024-11-27 08:09:37.814822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.822930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.822963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.822972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.829859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.829883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.829892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.838007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.838031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.838039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.845229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.845252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.845260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.851851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.851875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.851884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.859170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.859194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.859203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.866381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.866406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.866416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.872902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.872926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.872935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.880035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.880058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.880067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.887782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.887806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.887815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.896678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.896702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.896718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.905430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.905453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.905461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.913403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.913426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.913435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.921845] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.921868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.921877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.930503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.930526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.930535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.938853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.969 [2024-11-27 08:09:37.938875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.969 [2024-11-27 08:09:37.938884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.969 [2024-11-27 08:09:37.946791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:37.946814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:37.946823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:37.955182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:37.955205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:37.955214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:37.963011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:37.963034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:37.963043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:37.971287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:37.971311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:37.971320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:43.970 [2024-11-27 08:09:37.978829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:37.978852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:37.978860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:37.986698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:37.986719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:37.986728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:37.994547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:37.994571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:37.994579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.001943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.001974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.001983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.009833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.009857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.009866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.017228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.017252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.017261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.024828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.024852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.024860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.032392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.032415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.032427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.039088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.039110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.039119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.046450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.046474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.046483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.053299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.053322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.053331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.059964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.060003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.060012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.067241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.067265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.067273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.970 [2024-11-27 08:09:38.074015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:43.970 [2024-11-27 08:09:38.074041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.970 [2024-11-27 08:09:38.074050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.230 [2024-11-27 08:09:38.081098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.230 [2024-11-27 08:09:38.081124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.230 [2024-11-27 08:09:38.081134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.230 [2024-11-27 08:09:38.088918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.230 [2024-11-27 08:09:38.088942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.230 [2024-11-27 08:09:38.088957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.230 [2024-11-27 08:09:38.096233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.230 [2024-11-27 08:09:38.096261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.230 [2024-11-27 08:09:38.096269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.230 [2024-11-27 08:09:38.103675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.103699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.103709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.111820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.111842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.111851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.118956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.118980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.118990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.126275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.126299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:44.231 [2024-11-27 08:09:38.126307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.133690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.133714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.133723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.142348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.142373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.142381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.149719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.149753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.149762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.156263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.156287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.156296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.160875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.160897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.160906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.166854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.166877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.166886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.173519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.173542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.173551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.180981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.181004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.181013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.187202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.187226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.187235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.194149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.194173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.194182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.200934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.200964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.200973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.208320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.208345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.208354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.215640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.215663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.215676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.222776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.222800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.222809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.230555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.230578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.230587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.237229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.237252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.231 [2024-11-27 08:09:38.237261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.231 [2024-11-27 08:09:38.245211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.231 [2024-11-27 08:09:38.245234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.245242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.253050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.253073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.253082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.260255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.260277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.260286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.267645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.267669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.267678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.274852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 
00:26:44.232 [2024-11-27 08:09:38.274876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.274884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.282127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.282155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.282164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.289915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.289937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.289945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.296886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.296909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.296918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.303593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.303616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.303625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.311213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.311236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.311245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.317997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.318019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.318028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.325444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.325467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.325475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.232 [2024-11-27 08:09:38.332858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.232 [2024-11-27 08:09:38.332881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.232 [2024-11-27 08:09:38.332890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.339764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.339791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.339800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.346383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.346407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.346416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.353645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.353669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.353678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.360720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.360744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.360752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.367003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.367027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.367035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.374395] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.374421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.374432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.381352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.381378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.381387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.388544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.388569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.388578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.395727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.395751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.395760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.401661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.401684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.401697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.407899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.492 [2024-11-27 08:09:38.407923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.492 [2024-11-27 08:09:38.407931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.492 [2024-11-27 08:09:38.415467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.415491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.415500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:44.493 [2024-11-27 08:09:38.422871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.422894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.422902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.429776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.429801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.429810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.436344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.436366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.436375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.442816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.442839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.442847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.449891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.449915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.449923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.456735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.456758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.456766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.463227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.463249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.463258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.470569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.470593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.470601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.477973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.477996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.478005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.485303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.485326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.485335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.493199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.493222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.493231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.501533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.501557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.501565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.509702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.509725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.509734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.516887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.516910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.516919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.524318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.524341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.524353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.531150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.531172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.531181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.538085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.538108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.538116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.544723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.544745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.544753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.549347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.549368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.549376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.556322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.556345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.493 [2024-11-27 08:09:38.556353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.562998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.493 [2024-11-27 08:09:38.563020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:44.493 [2024-11-27 08:09:38.563028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.493 [2024-11-27 08:09:38.569011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.494 [2024-11-27 08:09:38.569033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.494 [2024-11-27 08:09:38.569042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.494 [2024-11-27 08:09:38.576341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.494 [2024-11-27 08:09:38.576364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.494 [2024-11-27 08:09:38.576373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.494 [2024-11-27 08:09:38.582469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.494 [2024-11-27 08:09:38.582494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.494 [2024-11-27 08:09:38.582502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.494 [2024-11-27 08:09:38.589312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.494 [2024-11-27 08:09:38.589335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.494 [2024-11-27 08:09:38.589344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.494 [2024-11-27 08:09:38.596788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.494 [2024-11-27 08:09:38.596812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.494 [2024-11-27 08:09:38.596821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.754 4170.00 IOPS, 521.25 MiB/s [2024-11-27T07:09:38.863Z] [2024-11-27 08:09:38.604592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.604615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.604624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.612076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.612099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.612108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.619181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.619203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.619212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.626668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.626693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.626705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.633169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.633192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.633201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.640177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.640200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.640209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.646831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.646855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.646863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.653661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.653685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.653693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.660820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 
00:26:44.754 [2024-11-27 08:09:38.660842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.660851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.667816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.667838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.667847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.676055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.676078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.676087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.683866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.683890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.683899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.691753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.691777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.691785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.699642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.699663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.699672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.707375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.707397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.707411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.716090] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.716112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.716121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.723813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.723835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.754 [2024-11-27 08:09:38.723843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.754 [2024-11-27 08:09:38.731064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.754 [2024-11-27 08:09:38.731086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.731095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.738917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.738939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.738954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.745308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.745330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.745338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.752659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.752681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.752690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.759214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.759237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.759246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.765769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.765791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.765799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.772015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.772037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.772046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.778655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.778678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.778687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.786080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.786102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.786112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.793494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.793517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.793525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.800786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.800808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.800816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.808187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.808210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.808218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.815823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.815846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.815855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.823464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.823486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.823494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.830381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.830403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.830418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.837531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.837555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.837564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.844960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.844982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.844991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.852610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.852633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.852642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.755 [2024-11-27 08:09:38.861345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:44.755 [2024-11-27 08:09:38.861369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.755 [2024-11-27 08:09:38.861379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.015 [2024-11-27 08:09:38.869474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.015 [2024-11-27 08:09:38.869499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.015 [2024-11-27 08:09:38.869508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.015 [2024-11-27 08:09:38.877801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.015 [2024-11-27 08:09:38.877825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.015 [2024-11-27 08:09:38.877836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.015 [2024-11-27 08:09:38.885975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.015 [2024-11-27 08:09:38.885999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.015 [2024-11-27 08:09:38.886008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.015 [2024-11-27 08:09:38.893852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.015 [2024-11-27 08:09:38.893875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.015 [2024-11-27 08:09:38.893884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.015 [2024-11-27 08:09:38.902924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.015 [2024-11-27 08:09:38.902960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.015 [2024-11-27 08:09:38.902970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.015 [2024-11-27 08:09:38.911417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.015 [2024-11-27 08:09:38.911442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.015 [2024-11-27 08:09:38.911450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.015 [2024-11-27 08:09:38.920021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.015 [2024-11-27 08:09:38.920044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:45.015 [2024-11-27 08:09:38.920052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.015 [2024-11-27 08:09:38.927710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.015 [2024-11-27 08:09:38.927733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.015 [2024-11-27 08:09:38.927741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.015 [2024-11-27 08:09:38.935037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.015 [2024-11-27 08:09:38.935060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.015 [2024-11-27 08:09:38.935069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.015 [2024-11-27 08:09:38.942764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.015 [2024-11-27 08:09:38.942788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:38.942798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:38.950191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:38.950214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:38.950223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:38.957976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:38.957999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:38.958008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:38.966021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:38.966044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:38.966053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:38.974566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:38.974590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4352 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:38.974599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:38.982442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:38.982465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:38.982474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:38.990286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:38.990310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:38.990318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:38.997527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:38.997550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:38.997558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.004055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.004077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.004085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.011588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.011611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.011619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.019197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.019221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.019230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.026626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.026648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.026656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.034016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.034038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.034051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.042318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.042342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.042350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.051073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.051096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.051104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.059359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.059382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.059391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.068061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.068084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.068092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.075375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.075398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.075407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.083291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 
00:26:45.016 [2024-11-27 08:09:39.083314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.083323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.090890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.090914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.090923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.098173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.016 [2024-11-27 08:09:39.098196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.016 [2024-11-27 08:09:39.098205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.016 [2024-11-27 08:09:39.104410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.017 [2024-11-27 08:09:39.104436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.017 [2024-11-27 08:09:39.104445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.017 [2024-11-27 08:09:39.112218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.017 [2024-11-27 08:09:39.112239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.017 [2024-11-27 08:09:39.112248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.017 [2024-11-27 08:09:39.119504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.017 [2024-11-27 08:09:39.119529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.017 [2024-11-27 08:09:39.119538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.126856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.126883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.126893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.134523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.134547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.134556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.142176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.142201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.142210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.150004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.150027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.150035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.157680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.157703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.157711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.164685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.164707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.164716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.171546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.171569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.171577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.179706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.179729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.179738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.187690] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.187713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.187721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.195866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.195888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.195897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.203560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.203582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.203591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.210608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.210631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.210640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.218172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.218196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.218204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.226209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.226232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.226240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.233796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.233824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.233833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:45.277 [2024-11-27 08:09:39.240496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.240520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.240529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.247553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.277 [2024-11-27 08:09:39.247576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.277 [2024-11-27 08:09:39.247585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.277 [2024-11-27 08:09:39.254832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.254855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.254864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.261921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.261944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.261960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.270103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.270126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.270135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.278474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.278498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.278506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.287632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.287655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.287664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.296542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.296565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.296573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.304633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.304656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.304665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.312522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.312545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.312554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.320563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.320587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.320595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.327743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.327767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.327776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.335845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.335869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.335877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.344040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.344063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.344071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.353011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.353035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.353043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.361211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.361234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.361243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.369779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.369802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.369815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.278 [2024-11-27 08:09:39.377179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.278 [2024-11-27 08:09:39.377203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.278 [2024-11-27 08:09:39.377213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.537 [2024-11-27 08:09:39.384993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.537 [2024-11-27 08:09:39.385018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.537 [2024-11-27 08:09:39.385027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.537 [2024-11-27 08:09:39.393582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.393605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.393614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.401402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.401426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:45.538 [2024-11-27 08:09:39.401436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.409892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.409916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.409925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.418859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.418884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.418892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.427428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.427452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.427461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.435746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.435770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.435778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.444276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.444306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.444315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.453256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.453281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.453291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.462279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.462303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.462312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.471354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.471379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.471388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.479232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.479256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.479265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.488131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.488155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.488164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.496079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.496102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.496111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.503953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.503977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.503985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.511906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.511929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.511938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.518684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.518707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.518716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.526262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.526285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.526294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.534542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.534565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.534573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.542968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.542991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.543000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.550728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.550752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.550761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.538 [2024-11-27 08:09:39.558282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.538 [2024-11-27 08:09:39.558305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.538 [2024-11-27 08:09:39.558313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.539 [2024-11-27 08:09:39.565999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.539 [2024-11-27 08:09:39.566022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.539 [2024-11-27 08:09:39.566030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.539 [2024-11-27 08:09:39.573008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.539 
[2024-11-27 08:09:39.573031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.539 [2024-11-27 08:09:39.573040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.539 [2024-11-27 08:09:39.579833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.539 [2024-11-27 08:09:39.579857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.539 [2024-11-27 08:09:39.579869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.539 [2024-11-27 08:09:39.587012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.539 [2024-11-27 08:09:39.587034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.539 [2024-11-27 08:09:39.587043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.539 [2024-11-27 08:09:39.593682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.539 [2024-11-27 08:09:39.593705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.539 [2024-11-27 08:09:39.593714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.539 [2024-11-27 08:09:39.600790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c381a0) 00:26:45.539 [2024-11-27 08:09:39.600813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.539 [2024-11-27 08:09:39.600822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.539 4097.50 IOPS, 512.19 MiB/s 00:26:45.539 Latency(us) 00:26:45.539 [2024-11-27T07:09:39.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.539 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:45.539 nvme0n1 : 2.00 4100.36 512.55 0.00 0.00 3899.38 1025.78 13848.04 00:26:45.539 [2024-11-27T07:09:39.648Z] =================================================================================================================== 00:26:45.539 [2024-11-27T07:09:39.648Z] Total : 4100.36 512.55 0.00 0.00 3899.38 1025.78 13848.04 00:26:45.539 { 00:26:45.539 "results": [ 00:26:45.539 { 00:26:45.539 "job": "nvme0n1", 00:26:45.539 "core_mask": "0x2", 00:26:45.539 "workload": "randread", 00:26:45.539 "status": "finished", 00:26:45.539 "queue_depth": 16, 00:26:45.539 "io_size": 131072, 00:26:45.539 "runtime": 2.002506, 00:26:45.539 "iops": 4100.362246105629, 00:26:45.539 "mibps": 512.5452807632037, 00:26:45.539 "io_failed": 0, 00:26:45.539 "io_timeout": 0, 00:26:45.539 "avg_latency_us": 3899.376470800041, 00:26:45.539 "min_latency_us": 1025.7808695652175, 00:26:45.539 
"max_latency_us": 13848.041739130435 00:26:45.539 } 00:26:45.539 ], 00:26:45.539 "core_count": 1 00:26:45.539 } 00:26:45.539 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:45.539 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:45.539 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:45.539 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:45.539 | .driver_specific 00:26:45.539 | .nvme_error 00:26:45.539 | .status_code 00:26:45.539 | .command_transient_transport_error' 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 265 > 0 )) 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2596172 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2596172 ']' 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2596172 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2596172 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2596172' 00:26:45.799 killing process with pid 2596172 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2596172 00:26:45.799 Received shutdown signal, test time was about 2.000000 seconds 00:26:45.799 00:26:45.799 Latency(us) 00:26:45.799 [2024-11-27T07:09:39.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.799 [2024-11-27T07:09:39.908Z] =================================================================================================================== 00:26:45.799 [2024-11-27T07:09:39.908Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:45.799 08:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2596172 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2596741 
00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2596741 /var/tmp/bperf.sock 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2596741 ']' 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:46.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.058 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.058 [2024-11-27 08:09:40.087785] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:26:46.058 [2024-11-27 08:09:40.087839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596741 ] 00:26:46.058 [2024-11-27 08:09:40.146170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.316 [2024-11-27 08:09:40.190700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.316 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.316 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:46.316 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:46.316 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:46.575 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:46.575 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.575 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.575 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.575 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.575 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.837 nvme0n1 00:26:47.095 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:47.095 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.095 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.095 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.095 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:47.096 08:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:47.096 Running I/O for 2 seconds... 00:26:47.096 [2024-11-27 08:09:41.079898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee5220 00:26:47.096 [2024-11-27 08:09:41.080771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.080802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.089700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef81e0 00:26:47.096 [2024-11-27 08:09:41.090534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.090556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.099446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee7c50 00:26:47.096 [2024-11-27 08:09:41.100270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.100291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.108277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef4f40 00:26:47.096 [2024-11-27 08:09:41.109082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.109102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.119792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef4f40 00:26:47.096 [2024-11-27 08:09:41.121115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.121134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
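
Before the error entries that follow, the xtrace above shows the setup for the randwrite/4096/qd128 pass. A sketch of that sequence, with every flag copied from the trace (variable names are illustrative; rpc_cmd is the autotest helper that talks to the nvmf target application rather than to the bperf.sock socket used by bperf_rpc):

  # Sketch only, assuming the same workspace layout as this run.
  bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  sock=/var/tmp/bperf.sock

  # Start bdevperf on core 1 (-m 2); -z makes it wait for a perform_tests RPC.
  "$bperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
  # (the trace then waits for the process to listen on $sock before issuing RPCs)

  # Collect per-controller NVMe error stats and retry failed I/O indefinitely.
  "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep error injection off while attaching, then attach with TCP data digest
  # enabled so a corrupted CRC32C surfaces as a data digest / transient transport error.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt crc32c results with an injection interval of 256, then run the 2-second workload.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  "$bperf_py" -s "$sock" perform_tests
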
00:26:47.096 [2024-11-27 08:09:41.128154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee4140 00:26:47.096 [2024-11-27 08:09:41.128946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.128969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.137716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef92c0 00:26:47.096 [2024-11-27 08:09:41.138515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.138534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.147418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee8d30 00:26:47.096 [2024-11-27 08:09:41.148224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.148243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.156238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef3e60 00:26:47.096 [2024-11-27 08:09:41.157009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.157027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.165792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee95a0 00:26:47.096 [2024-11-27 08:09:41.166559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.166578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.177234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee95a0 00:26:47.096 [2024-11-27 08:09:41.178515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.178533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.185718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efa3a0 00:26:47.096 [2024-11-27 08:09:41.186478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.186497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 
sqhd:0039 p:0 m:0 dnr:0 00:26:47.096 [2024-11-27 08:09:41.195257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee27f0 00:26:47.096 [2024-11-27 08:09:41.196013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.096 [2024-11-27 08:09:41.196033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:47.356 [2024-11-27 08:09:41.205403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef7538 00:26:47.356 [2024-11-27 08:09:41.206035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.356 [2024-11-27 08:09:41.206060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:47.356 [2024-11-27 08:09:41.215390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6738 00:26:47.356 [2024-11-27 08:09:41.216264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.356 [2024-11-27 08:09:41.216285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:47.356 [2024-11-27 08:09:41.224212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef2510 00:26:47.356 [2024-11-27 08:09:41.225157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.356 [2024-11-27 08:09:41.225177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:47.356 [2024-11-27 08:09:41.234241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efda78 00:26:47.356 [2024-11-27 08:09:41.235295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.356 [2024-11-27 08:09:41.235315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:47.356 [2024-11-27 08:09:41.244426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee5658 00:26:47.356 [2024-11-27 08:09:41.245620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.356 [2024-11-27 08:09:41.245640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:47.356 [2024-11-27 08:09:41.254575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eeaef0 00:26:47.356 [2024-11-27 08:09:41.255907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.356 [2024-11-27 08:09:41.255925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:47.356 [2024-11-27 08:09:41.264665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0630 00:26:47.356 [2024-11-27 08:09:41.266044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.356 [2024-11-27 08:09:41.266063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:47.356 [2024-11-27 08:09:41.273009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee95a0 00:26:47.356 [2024-11-27 08:09:41.273866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.356 [2024-11-27 08:09:41.273884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:47.356 [2024-11-27 08:09:41.281942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0630 00:26:47.356 [2024-11-27 08:09:41.282850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.356 [2024-11-27 08:09:41.282872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:47.356 [2024-11-27 08:09:41.293447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0630 00:26:47.357 [2024-11-27 08:09:41.294885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.294905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.303464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eeaef0 00:26:47.357 [2024-11-27 08:09:41.305024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.305043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.313492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efda78 00:26:47.357 [2024-11-27 08:09:41.315192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.315211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.320310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef0bc0 00:26:47.357 [2024-11-27 08:09:41.321116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.321135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.330359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef3e60 00:26:47.357 [2024-11-27 08:09:41.331302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.331320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.340152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef4298 00:26:47.357 [2024-11-27 08:09:41.341252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.341270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.350202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee38d0 00:26:47.357 [2024-11-27 08:09:41.351384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.351414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.360294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee5658 00:26:47.357 [2024-11-27 08:09:41.361606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.361624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.370293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6300 00:26:47.357 [2024-11-27 08:09:41.371740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.371762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.380342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eed920 00:26:47.357 [2024-11-27 08:09:41.381927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.381951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.390473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efbcf0 00:26:47.357 [2024-11-27 08:09:41.392170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.392189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.397262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee12d8 00:26:47.357 [2024-11-27 08:09:41.398059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.398077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.407276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efa7d8 00:26:47.357 [2024-11-27 08:09:41.408219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.408237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.416328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee8d30 00:26:47.357 [2024-11-27 08:09:41.417244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.417262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.426402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef4298 00:26:47.357 [2024-11-27 08:09:41.427468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.427486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.436704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eeb328 00:26:47.357 [2024-11-27 08:09:41.437923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.437941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.447141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef6890 00:26:47.357 [2024-11-27 08:09:41.448453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 08:09:41.448471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:47.357 [2024-11-27 08:09:41.457225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:47.357 [2024-11-27 08:09:41.458697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.357 [2024-11-27 
08:09:41.458715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.467628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eebb98 00:26:47.617 [2024-11-27 08:09:41.469248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.469269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.477757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef7da8 00:26:47.617 [2024-11-27 08:09:41.479459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.479478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.484611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef2510 00:26:47.617 [2024-11-27 08:09:41.485425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.485444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.494372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee1710 00:26:47.617 [2024-11-27 08:09:41.495205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.495225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.503879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ede470 00:26:47.617 [2024-11-27 08:09:41.504605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.504625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.513509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef6458 00:26:47.617 [2024-11-27 08:09:41.514240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.514258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.523097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eee190 00:26:47.617 [2024-11-27 08:09:41.523820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:47.617 [2024-11-27 08:09:41.523838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.532720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef96f8 00:26:47.617 [2024-11-27 08:09:41.533448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.533466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.541630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef6020 00:26:47.617 [2024-11-27 08:09:41.542443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.542461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.551567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eee5c8 00:26:47.617 [2024-11-27 08:09:41.552507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.552525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.561648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef4298 00:26:47.617 [2024-11-27 08:09:41.562708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.562727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.571677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eef270 00:26:47.617 [2024-11-27 08:09:41.572860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.572878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.581692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6b70 00:26:47.617 [2024-11-27 08:09:41.583037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.583054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.592048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efb048 00:26:47.617 [2024-11-27 08:09:41.593497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8099 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.593515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.602081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef1430 00:26:47.617 [2024-11-27 08:09:41.603651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.603669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.612027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efef90 00:26:47.617 [2024-11-27 08:09:41.613718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.613736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.618835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef0350 00:26:47.617 [2024-11-27 08:09:41.619645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.619666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.628360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eeff18 00:26:47.617 [2024-11-27 08:09:41.629298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.629317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.638391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eedd58 00:26:47.617 [2024-11-27 08:09:41.639454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.639472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.648486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efa7d8 00:26:47.617 [2024-11-27 08:09:41.649672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.649690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.658508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eebb98 00:26:47.617 [2024-11-27 08:09:41.659820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:14506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.659838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.668447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eeee38 00:26:47.617 [2024-11-27 08:09:41.669880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.669898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.678546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efc560 00:26:47.617 [2024-11-27 08:09:41.680128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.680148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.688707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee8088 00:26:47.617 [2024-11-27 08:09:41.690397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.617 [2024-11-27 08:09:41.690415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 08:09:41.695508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efe2e8 00:26:47.617 [2024-11-27 08:09:41.696294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.618 [2024-11-27 08:09:41.696313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 08:09:41.705227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee73e0 00:26:47.618 [2024-11-27 08:09:41.706038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.618 [2024-11-27 08:09:41.706056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 08:09:41.714802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef1430 00:26:47.618 [2024-11-27 08:09:41.715634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.618 [2024-11-27 08:09:41.715652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 08:09:41.724565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eebfd0 00:26:47.878 [2024-11-27 08:09:41.725412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:109 nsid:1 lba:11683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.878 [2024-11-27 08:09:41.725435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.878 [2024-11-27 08:09:41.734320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edf118 00:26:47.878 [2024-11-27 08:09:41.735155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.878 [2024-11-27 08:09:41.735177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.878 [2024-11-27 08:09:41.744093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:47.878 [2024-11-27 08:09:41.744898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.878 [2024-11-27 08:09:41.744918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.753768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee4578 00:26:47.879 [2024-11-27 08:09:41.754607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.754626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.763381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef6458 00:26:47.879 [2024-11-27 08:09:41.764212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.764230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.772939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efda78 00:26:47.879 [2024-11-27 08:09:41.773746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.773764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.782609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef0350 00:26:47.879 [2024-11-27 08:09:41.783446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.783465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.792238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efdeb0 00:26:47.879 [2024-11-27 08:09:41.793040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.793059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.801800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ede8a8 00:26:47.879 [2024-11-27 08:09:41.802631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.802649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.811385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6738 00:26:47.879 [2024-11-27 08:09:41.812193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.812212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.820977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef8618 00:26:47.879 [2024-11-27 08:09:41.821780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.821798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.830594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef9b30 00:26:47.879 [2024-11-27 08:09:41.831422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.831441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.840158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eef270 00:26:47.879 [2024-11-27 08:09:41.840988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.841006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.850031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee5220 00:26:47.879 [2024-11-27 08:09:41.850859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.850877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.859646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef57b0 00:26:47.879 [2024-11-27 
08:09:41.860477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.860495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.869204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef35f0 00:26:47.879 [2024-11-27 08:09:41.870009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.870030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.878779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eec408 00:26:47.879 [2024-11-27 08:09:41.879609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.879627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.888428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee23b8 00:26:47.879 [2024-11-27 08:09:41.889262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.889280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.897992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edf550 00:26:47.879 [2024-11-27 08:09:41.898796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.898813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.907576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0630 00:26:47.879 [2024-11-27 08:09:41.908406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.908424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.917171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee38d0 00:26:47.879 [2024-11-27 08:09:41.917980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.917998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.926736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee99d8 
00:26:47.879 [2024-11-27 08:09:41.927552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.927571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.936342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efd640 00:26:47.879 [2024-11-27 08:09:41.937182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.937200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.945919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efc998 00:26:47.879 [2024-11-27 08:09:41.946768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.879 [2024-11-27 08:09:41.946787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.879 [2024-11-27 08:09:41.955576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efb480 00:26:47.879 [2024-11-27 08:09:41.956408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.880 [2024-11-27 08:09:41.956426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.880 [2024-11-27 08:09:41.965129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efc128 00:26:47.880 [2024-11-27 08:09:41.965960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.880 [2024-11-27 08:09:41.965978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.880 [2024-11-27 08:09:41.974686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef7da8 00:26:47.880 [2024-11-27 08:09:41.975504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.880 [2024-11-27 08:09:41.975522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:47.880 [2024-11-27 08:09:41.984362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eff3c8 00:26:47.880 [2024-11-27 08:09:41.985218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.880 [2024-11-27 08:09:41.985238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.139 [2024-11-27 08:09:41.994341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with 
pdu=0x200016ef0788 00:26:48.139 [2024-11-27 08:09:41.995177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.139 [2024-11-27 08:09:41.995198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.139 [2024-11-27 08:09:42.003932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eef6a8 00:26:48.139 [2024-11-27 08:09:42.004773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.139 [2024-11-27 08:09:42.004792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.139 [2024-11-27 08:09:42.013519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efe2e8 00:26:48.139 [2024-11-27 08:09:42.014325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.139 [2024-11-27 08:09:42.014344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.139 [2024-11-27 08:09:42.023113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee73e0 00:26:48.139 [2024-11-27 08:09:42.023949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.139 [2024-11-27 08:09:42.023968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.139 [2024-11-27 08:09:42.032724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef1430 00:26:48.139 [2024-11-27 08:09:42.033556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.139 [2024-11-27 08:09:42.033574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.139 [2024-11-27 08:09:42.042274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eebfd0 00:26:48.139 [2024-11-27 08:09:42.043110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.139 [2024-11-27 08:09:42.043129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.051893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edf118 00:26:48.140 [2024-11-27 08:09:42.052722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.052741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.061536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x145e180) with pdu=0x200016eea680 00:26:48.140 [2024-11-27 08:09:42.062256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.062274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.140 26378.00 IOPS, 103.04 MiB/s [2024-11-27T07:09:42.249Z] [2024-11-27 08:09:42.070914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eea680 00:26:48.140 [2024-11-27 08:09:42.071725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.071744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.080575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eee190 00:26:48.140 [2024-11-27 08:09:42.081360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.081378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.090223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eee5c8 00:26:48.140 [2024-11-27 08:09:42.091013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.091032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.100077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee12d8 00:26:48.140 [2024-11-27 08:09:42.100889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.100908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.109752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef2948 00:26:48.140 [2024-11-27 08:09:42.110566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.110585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.119318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eed4e8 00:26:48.140 [2024-11-27 08:09:42.120096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.120118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 
08:09:42.128966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:48.140 [2024-11-27 08:09:42.129771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.129789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.138620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.139414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.139432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.148030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.148811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.148830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.157600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.158391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.158409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.167136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.167926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.167944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.176723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.177511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.177529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.186328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.187118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.187136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
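
Each injected corruption above produces the same three-entry pattern: a tcp.c data_crc32_calc_done data digest error, the offending WRITE command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion. Purely as an illustrative convenience (the file name is hypothetical, not part of the test), the injected errors in a saved copy of this console output could be tallied with:

  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf_console.log
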
00:26:48.140 [2024-11-27 08:09:42.195838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.196625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.196643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.205364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.206178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.206196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.214905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.215691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.215709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.224446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.225235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.225253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.233986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.234767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.234786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.140 [2024-11-27 08:09:42.243571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.140 [2024-11-27 08:09:42.244431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.140 [2024-11-27 08:09:42.244466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.399 [2024-11-27 08:09:42.253619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:48.399 [2024-11-27 08:09:42.254425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.399 [2024-11-27 08:09:42.254445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 
cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.399 [2024-11-27 08:09:42.263075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:48.399 [2024-11-27 08:09:42.263848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.399 [2024-11-27 08:09:42.263867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.399 [2024-11-27 08:09:42.272618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:48.399 [2024-11-27 08:09:42.273415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.399 [2024-11-27 08:09:42.273434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.399 [2024-11-27 08:09:42.282220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:48.399 [2024-11-27 08:09:42.282999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.399 [2024-11-27 08:09:42.283018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.399 [2024-11-27 08:09:42.291818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:48.399 [2024-11-27 08:09:42.292603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.399 [2024-11-27 08:09:42.292621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.399 [2024-11-27 08:09:42.301384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:48.399 [2024-11-27 08:09:42.302205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.399 [2024-11-27 08:09:42.302223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.399 [2024-11-27 08:09:42.310963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:48.400 [2024-11-27 08:09:42.311756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.311774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.320500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:48.400 [2024-11-27 08:09:42.321209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.321227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.330068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0ea0 00:26:48.400 [2024-11-27 08:09:42.330858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.330877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.340129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6fa8 00:26:48.400 [2024-11-27 08:09:42.340690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.340709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.350233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef7100 00:26:48.400 [2024-11-27 08:09:42.351054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.351074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.359091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee4578 00:26:48.400 [2024-11-27 08:09:42.359989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.360007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.369233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef31b8 00:26:48.400 [2024-11-27 08:09:42.370239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.370260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.379959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee2c28 00:26:48.400 [2024-11-27 08:09:42.381104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.381124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.389597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee23b8 00:26:48.400 [2024-11-27 08:09:42.390755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.390773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.399171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efe2e8 00:26:48.400 [2024-11-27 08:09:42.400325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.400343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.408795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eea680 00:26:48.400 [2024-11-27 08:09:42.409849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.409867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.417584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efa7d8 00:26:48.400 [2024-11-27 08:09:42.418714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.418732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.427606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efb480 00:26:48.400 [2024-11-27 08:09:42.428859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.428878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.437653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efef90 00:26:48.400 [2024-11-27 08:09:42.438973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.438991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.446039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6fa8 00:26:48.400 [2024-11-27 08:09:42.446839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.446857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.455652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edf988 00:26:48.400 [2024-11-27 08:09:42.456535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.456553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.465227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edf988 00:26:48.400 [2024-11-27 08:09:42.466106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.466125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.474790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edf988 00:26:48.400 [2024-11-27 08:09:42.475713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.475732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.484388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edf988 00:26:48.400 [2024-11-27 08:09:42.485237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.485255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.493962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef57b0 00:26:48.400 [2024-11-27 08:09:42.494737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.494755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:48.400 [2024-11-27 08:09:42.503668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef3e60 00:26:48.400 [2024-11-27 08:09:42.504469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.400 [2024-11-27 08:09:42.504489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.512831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee2c28 00:26:48.659 [2024-11-27 08:09:42.513609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.513629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.524512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee2c28 00:26:48.659 [2024-11-27 08:09:42.525791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 
08:09:42.525810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.532888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef7da8 00:26:48.659 [2024-11-27 08:09:42.533771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.533790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.542464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef7da8 00:26:48.659 [2024-11-27 08:09:42.543321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.543340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.552047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef7da8 00:26:48.659 [2024-11-27 08:09:42.552893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.552911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.561650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef7da8 00:26:48.659 [2024-11-27 08:09:42.562407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.562426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.571210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef1ca0 00:26:48.659 [2024-11-27 08:09:42.572017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.572036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.580743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6b70 00:26:48.659 [2024-11-27 08:09:42.581566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.581585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.590398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6b70 00:26:48.659 [2024-11-27 08:09:42.591242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17125 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:48.659 [2024-11-27 08:09:42.591260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.600212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6b70 00:26:48.659 [2024-11-27 08:09:42.601027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.601046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.609779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6b70 00:26:48.659 [2024-11-27 08:09:42.610531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.610550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.618647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eef270 00:26:48.659 [2024-11-27 08:09:42.619386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.619408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.628831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eef270 00:26:48.659 [2024-11-27 08:09:42.629655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.629673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.638366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eef270 00:26:48.659 [2024-11-27 08:09:42.639222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.639240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.647975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eef270 00:26:48.659 [2024-11-27 08:09:42.648699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.648718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.657528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efc128 00:26:48.659 [2024-11-27 08:09:42.658248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16761 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.659 [2024-11-27 08:09:42.658267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.659 [2024-11-27 08:09:42.667164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eeaab8 00:26:48.659 [2024-11-27 08:09:42.667870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.667887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.660 [2024-11-27 08:09:42.676670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ede470 00:26:48.660 [2024-11-27 08:09:42.677379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.677398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.660 [2024-11-27 08:09:42.686135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ede470 00:26:48.660 [2024-11-27 08:09:42.686940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.686964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.660 [2024-11-27 08:09:42.695678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ede470 00:26:48.660 [2024-11-27 08:09:42.696495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.696514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.660 [2024-11-27 08:09:42.705280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ede470 00:26:48.660 [2024-11-27 08:09:42.706092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.706111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.660 [2024-11-27 08:09:42.714972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eea248 00:26:48.660 [2024-11-27 08:09:42.715743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.715761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.660 [2024-11-27 08:09:42.724376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eea248 00:26:48.660 [2024-11-27 08:09:42.725176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 
nsid:1 lba:24202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.725194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.660 [2024-11-27 08:09:42.733896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eea248 00:26:48.660 [2024-11-27 08:09:42.734688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.734706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.660 [2024-11-27 08:09:42.744651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eea248 00:26:48.660 [2024-11-27 08:09:42.745921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.745938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.660 [2024-11-27 08:09:42.754698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee27f0 00:26:48.660 [2024-11-27 08:09:42.756115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.756133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.660 [2024-11-27 08:09:42.764816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee38d0 00:26:48.660 [2024-11-27 08:09:42.766380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.660 [2024-11-27 08:09:42.766399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.773525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eeea00 00:26:48.919 [2024-11-27 08:09:42.774489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.774508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.783451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee4578 00:26:48.919 [2024-11-27 08:09:42.784600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.784619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.792560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eef270 00:26:48.919 [2024-11-27 08:09:42.793718] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.793737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.802574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eed920 00:26:48.919 [2024-11-27 08:09:42.803843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.803861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.812607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eea248 00:26:48.919 [2024-11-27 08:09:42.814002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.814021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.822619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef0350 00:26:48.919 [2024-11-27 08:09:42.824151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.824169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.832634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee01f8 00:26:48.919 [2024-11-27 08:09:42.834269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.834287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.839420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ede8a8 00:26:48.919 [2024-11-27 08:09:42.840190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.840208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.850391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef5378 00:26:48.919 [2024-11-27 08:09:42.851704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.851722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.860721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eef270 00:26:48.919 [2024-11-27 08:09:42.862108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.862126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.870729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee3060 00:26:48.919 [2024-11-27 08:09:42.872259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.872281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.880750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eec840 00:26:48.919 [2024-11-27 08:09:42.882422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.882441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.887659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef3e60 00:26:48.919 [2024-11-27 08:09:42.888339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.888357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.897675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edf988 00:26:48.919 [2024-11-27 08:09:42.898559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.919 [2024-11-27 08:09:42.898577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:48.919 [2024-11-27 08:09:42.907350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef0788 00:26:48.920 [2024-11-27 08:09:42.908154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:42.908172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:42.917159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edece0 00:26:48.920 [2024-11-27 08:09:42.917805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:42.917823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:42.927158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee6300 00:26:48.920 [2024-11-27 
08:09:42.927928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:42.927951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:42.936222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef3e60 00:26:48.920 [2024-11-27 08:09:42.937473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:42.937491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:42.944409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eed920 00:26:48.920 [2024-11-27 08:09:42.945164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:42.945182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:42.954413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef1ca0 00:26:48.920 [2024-11-27 08:09:42.955316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:42.955334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:42.964459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edece0 00:26:48.920 [2024-11-27 08:09:42.965441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:42.965459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:42.974473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016edf988 00:26:48.920 [2024-11-27 08:09:42.975611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:42.975628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:42.984594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efd640 00:26:48.920 [2024-11-27 08:09:42.985889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:42.985907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:42.994916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ef1ca0 
00:26:48.920 [2024-11-27 08:09:42.996337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:42.996356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:43.005002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee3d08 00:26:48.920 [2024-11-27 08:09:43.006542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:43.006561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:43.015104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016efa3a0 00:26:48.920 [2024-11-27 08:09:43.016748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:43.016766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:48.920 [2024-11-27 08:09:43.021875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eec408 00:26:48.920 [2024-11-27 08:09:43.022657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.920 [2024-11-27 08:09:43.022676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:49.178 [2024-11-27 08:09:43.032390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee27f0 00:26:49.178 [2024-11-27 08:09:43.033304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.178 [2024-11-27 08:09:43.033324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:49.178 [2024-11-27 08:09:43.042157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee4de8 00:26:49.178 [2024-11-27 08:09:43.042955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.178 [2024-11-27 08:09:43.042974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:49.178 [2024-11-27 08:09:43.051793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016eeaef0 00:26:49.178 [2024-11-27 08:09:43.052592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.178 [2024-11-27 08:09:43.052612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:49.178 [2024-11-27 08:09:43.061420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) 
with pdu=0x200016ee3d08 00:26:49.178 [2024-11-27 08:09:43.062221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.178 [2024-11-27 08:09:43.062240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:49.178 [2024-11-27 08:09:43.071005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e180) with pdu=0x200016ee0a68 00:26:49.178 [2024-11-27 08:09:43.071907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.178 [2024-11-27 08:09:43.071926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:49.178 26486.00 IOPS, 103.46 MiB/s 00:26:49.178 Latency(us) 00:26:49.178 [2024-11-27T07:09:43.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.179 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:49.179 nvme0n1 : 2.00 26485.02 103.46 0.00 0.00 4826.24 2293.76 12195.39 00:26:49.179 [2024-11-27T07:09:43.288Z] =================================================================================================================== 00:26:49.179 [2024-11-27T07:09:43.288Z] Total : 26485.02 103.46 0.00 0.00 4826.24 2293.76 12195.39 00:26:49.179 { 00:26:49.179 "results": [ 00:26:49.179 { 00:26:49.179 "job": "nvme0n1", 00:26:49.179 "core_mask": "0x2", 00:26:49.179 "workload": "randwrite", 00:26:49.179 "status": "finished", 00:26:49.179 "queue_depth": 128, 00:26:49.179 "io_size": 4096, 00:26:49.179 "runtime": 2.004907, 00:26:49.179 "iops": 26485.0190058691, 00:26:49.179 "mibps": 103.45710549167617, 00:26:49.179 "io_failed": 0, 00:26:49.179 "io_timeout": 0, 00:26:49.179 "avg_latency_us": 4826.236073953983, 00:26:49.179 "min_latency_us": 2293.76, 00:26:49.179 "max_latency_us": 12195.394782608695 00:26:49.179 } 00:26:49.179 ], 00:26:49.179 "core_count": 1 00:26:49.179 } 00:26:49.179 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:49.179 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:49.179 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:49.179 | .driver_specific 00:26:49.179 | .nvme_error 00:26:49.179 | .status_code 00:26:49.179 | .command_transient_transport_error' 00:26:49.179 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 )) 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2596741 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2596741 ']' 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2596741 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2596741 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2596741' 00:26:49.437 killing process with pid 2596741 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2596741 00:26:49.437 Received shutdown signal, test time was about 2.000000 seconds 00:26:49.437 00:26:49.437 Latency(us) 00:26:49.437 [2024-11-27T07:09:43.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.437 [2024-11-27T07:09:43.546Z] =================================================================================================================== 00:26:49.437 [2024-11-27T07:09:43.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2596741 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2597218 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2597218 /var/tmp/bperf.sock 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2597218 ']' 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:49.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.437 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:49.437 [2024-11-27 08:09:43.541350] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
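(Editor's note, not part of the captured console output: the check that completes just above reads the transient-transport-error counter out of bdevperf's iostat before the first bperf process is killed. The condensed sketch below simply restates those xtrace'd commands from host/digest.sh in one place, using the same RPC socket, bdev name and jq filter that appear in the trace; the long workspace path is shortened to the SPDK repo root.)

    # get_transient_errcount nvme0n1, as traced in host/digest.sh above
    errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
               | jq -r '.bdevs[0]
                        | .driver_specific
                        | .nvme_error
                        | .status_code
                        | .command_transient_transport_error')
    (( errcount > 0 ))   # this run reported 208 such completions, so the check passes

(Each injected digest failure above was completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the status this counter is named after; a non-zero count is what the test treats as success.)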
00:26:49.437 [2024-11-27 08:09:43.541398] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2597218 ] 00:26:49.437 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:49.437 Zero copy mechanism will not be used. 00:26:49.696 [2024-11-27 08:09:43.600302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.696 [2024-11-27 08:09:43.643938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.696 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.696 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:49.696 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:49.696 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:49.955 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:49.955 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.955 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:49.955 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.955 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.955 08:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.214 nvme0n1 00:26:50.214 08:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:50.214 08:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.214 08:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.214 08:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.214 08:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:50.214 08:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:50.472 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.472 Zero copy mechanism will not be used. 00:26:50.472 Running I/O for 2 seconds... 
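(Editor's note, not part of the captured console output: the run_bperf_err pass above switches to 128 KiB random writes at queue depth 16. The sketch below condenses the xtrace'd setup commands into one listing, with the same flags, socket paths and NQN as the trace; paths are shortened to the SPDK repo root, and rpc_cmd is the test framework's wrapper for the target-side rpc.py, as opposed to the bperf calls that go to /var/tmp/bperf.sock.)

    # Launch bdevperf on its own RPC socket; -z makes it wait for the perform_tests RPC
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &

    # Enable per-command NVMe error statistics on the host-side bdev_nvme layer
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Error injection on the target's accel crc32c path (flags exactly as traced)
    rpc_cmd accel_error_inject_error -o crc32c -t disable

    # Attach the controller with data digest enabled (--ddgst) over TCP
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Switch the injection to corrupt (-o crc32c -t corrupt -i 32), then start the timed run
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

(The "Running I/O for 2 seconds..." line above marks where perform_tests takes effect; the digest-error records that follow are the expected result of the crc32c corruption being injected while --ddgst is enabled.)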
00:26:50.472 [2024-11-27 08:09:44.403329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.472 [2024-11-27 08:09:44.403429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.472 [2024-11-27 08:09:44.403456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.472 [2024-11-27 08:09:44.407984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.472 [2024-11-27 08:09:44.408057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.472 [2024-11-27 08:09:44.408080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.472 [2024-11-27 08:09:44.412425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.412506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.412527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.416889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.416996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.417020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.421293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.421379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.421399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.425645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.425759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.425777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.430022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.430140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.430159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.434379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.434455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.434473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.438781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.438852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.438870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.443102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.443168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.443186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.447533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.447596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.447615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.452693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.452772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.452790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.457074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.457151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.457169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.461583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.461643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.461661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.466001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.466065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.466084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.470358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.470458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.470476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.474696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.474769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.474786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.479700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.479770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.479788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.485174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.485238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.485256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.491159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.491219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.491237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.496905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.496991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.497009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.501931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.502017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.502036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.506959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.507035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.507053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.511913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.512007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.512025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.516560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.516650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.516668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.521135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.521230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.521248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.525700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.525788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.525806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.530267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.530358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.530377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.534837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.534966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.534987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.539363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.539438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.539468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.543829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.543906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.543924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.548366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.548440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.548460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.553039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.553124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.553143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.558209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.558283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.558303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.562918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.562985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 
08:09:44.563003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.567492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.567587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.567605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.572658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.572758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.572776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.473 [2024-11-27 08:09:44.578901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.473 [2024-11-27 08:09:44.578972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.473 [2024-11-27 08:09:44.578992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.731 [2024-11-27 08:09:44.586039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.586195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.586217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.593705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.593766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.593785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.600810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.600906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.600925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.608144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.608242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:50.732 [2024-11-27 08:09:44.608261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.614415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.614536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.614556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.619481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.619591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.619609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.624079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.624139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.624157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.628709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.628783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.628801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.633353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.633450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.633469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.637964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.638070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.638089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.643564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.643685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.643705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.649013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.649175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.649194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.654564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.654682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.654700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.660019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.660119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.660139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.665785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.665904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.665924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.671506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.671615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.671633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.677314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.677411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.677430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.683199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.683308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.683330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.688678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.688754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.688773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.693418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.693493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.693511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.698047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.698110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.698128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.702736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.732 [2024-11-27 08:09:44.702802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.732 [2024-11-27 08:09:44.702820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.732 [2024-11-27 08:09:44.707579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.707654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.707672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.712250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.712344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.712363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.716905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.716981] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.716999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.721790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.721858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.721876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.726474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.726554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.726573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.731385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.731452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.731470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.736962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.737066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.737083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.741832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.741913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.741931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.746530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.746624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.746642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.751456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.751534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.751553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.756105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.756181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.756199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.760736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.760852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.760871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.765329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.765399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.765417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.770886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.770978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.770996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.776367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.776429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.776447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.781550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.781619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.781637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.786246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 
08:09:44.786315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.786334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.790751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.790824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.790843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.795217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.795305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.795323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.733 [2024-11-27 08:09:44.799698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.733 [2024-11-27 08:09:44.799763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.733 [2024-11-27 08:09:44.799782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.734 [2024-11-27 08:09:44.804195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.734 [2024-11-27 08:09:44.804274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.734 [2024-11-27 08:09:44.804292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.734 [2024-11-27 08:09:44.809111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.734 [2024-11-27 08:09:44.809186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.734 [2024-11-27 08:09:44.809219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.734 [2024-11-27 08:09:44.813943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.734 [2024-11-27 08:09:44.814042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.734 [2024-11-27 08:09:44.814060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.734 [2024-11-27 08:09:44.819619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with 
pdu=0x200016eff3c8 00:26:50.734 [2024-11-27 08:09:44.819779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.734 [2024-11-27 08:09:44.819798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.734 [2024-11-27 08:09:44.825839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.734 [2024-11-27 08:09:44.826091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.734 [2024-11-27 08:09:44.826110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.734 [2024-11-27 08:09:44.832111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.734 [2024-11-27 08:09:44.832422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.734 [2024-11-27 08:09:44.832441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.734 [2024-11-27 08:09:44.838403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.734 [2024-11-27 08:09:44.838699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.734 [2024-11-27 08:09:44.838720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.845801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.846144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.846166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.852791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.853145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.853165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.860407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.860771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.860791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.867104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.867352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.867372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.872532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.872802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.872821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.878074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.878332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.878351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.883745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.884083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.884102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.890839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.891397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.891417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.896663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.896920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.896939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.902174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.902447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.902467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.907177] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.907429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.907448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.913037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.913315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.913334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.993 [2024-11-27 08:09:44.918789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.993 [2024-11-27 08:09:44.919054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.993 [2024-11-27 08:09:44.919073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.924378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.924617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.924635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.930063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.930299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.930318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.936146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.936384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.936403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.943418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.943782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.943802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.950340] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.950632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.950652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.956427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.956664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.956683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.962463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.962727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.962746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.967259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.967509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.967533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.972796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.973096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.973114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.979183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.979490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.979510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.985655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.986001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.986019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.994 
[2024-11-27 08:09:44.991269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.991504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.991523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:44.995741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:44.996004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:44.996023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.000186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.000434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.000454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.004578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.004838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.004857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.009326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.009562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.009581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.013844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.014093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.014112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.018460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.018706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.018725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.023102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.023364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.023383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.028898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.029159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.029178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.034101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.034362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.034381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.039003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.039249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.039268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.044288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.044556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.044575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.049453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.049678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.049697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.054585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.054870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.054888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.059809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.994 [2024-11-27 08:09:45.060070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.994 [2024-11-27 08:09:45.060090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.994 [2024-11-27 08:09:45.064490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.995 [2024-11-27 08:09:45.064745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.995 [2024-11-27 08:09:45.064764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.995 [2024-11-27 08:09:45.068972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.995 [2024-11-27 08:09:45.069246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.995 [2024-11-27 08:09:45.069265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.995 [2024-11-27 08:09:45.073834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.995 [2024-11-27 08:09:45.074081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.995 [2024-11-27 08:09:45.074099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.995 [2024-11-27 08:09:45.078303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.995 [2024-11-27 08:09:45.078545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.995 [2024-11-27 08:09:45.078565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.995 [2024-11-27 08:09:45.082691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.995 [2024-11-27 08:09:45.082937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.995 [2024-11-27 08:09:45.082962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.995 [2024-11-27 08:09:45.087091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.995 [2024-11-27 08:09:45.087358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.995 [2024-11-27 08:09:45.087377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.995 [2024-11-27 08:09:45.091460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.995 [2024-11-27 08:09:45.091723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.995 [2024-11-27 08:09:45.091743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.995 [2024-11-27 08:09:45.095826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.995 [2024-11-27 08:09:45.096096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.995 [2024-11-27 08:09:45.096119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.995 [2024-11-27 08:09:45.100352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:50.995 [2024-11-27 08:09:45.100627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.995 [2024-11-27 08:09:45.100649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.253 [2024-11-27 08:09:45.104749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.253 [2024-11-27 08:09:45.105032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.253 [2024-11-27 08:09:45.105052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.109172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.109440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.109460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.113535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.113780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.113800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.117871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.118113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.118132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.122210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.122451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.122470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.126542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.126792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.126811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.130881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.131145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.131165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.135238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.135514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.135534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.139642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.139902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.139922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.144373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.144623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.144642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.150067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.150298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 
08:09:45.150317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.155559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.155816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.155835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.160435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.160703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.160724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.165164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.165412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.165432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.169827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.170086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.170107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.174471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.174718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.174737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.179165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.179405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.179425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.183820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.184072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:51.254 [2024-11-27 08:09:45.184091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.188830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.189106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.189125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.193734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.193988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.194007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.198045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.198297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.198316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.202362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.202626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.202645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.206685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.206930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.206956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.210915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.211185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.211205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.215223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.215493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.215516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.219472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.219734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.219754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.223771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.224039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.224059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.227971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.228257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.228276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.232219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.232470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.232489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.236640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.236882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.254 [2024-11-27 08:09:45.236901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.254 [2024-11-27 08:09:45.241194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.254 [2024-11-27 08:09:45.241434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.241453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.245496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.245750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.245769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.249761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.250031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.250051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.254073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.254336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.254355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.258348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.258611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.258630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.262634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.262897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.262917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.266867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.267131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.267151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.271162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.271430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.271449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.275415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.275681] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.275700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.279629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.279878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.279897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.284415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.284666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.284685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.288892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.289152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.289172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.293586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.293823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.293842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.298886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.299159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.299178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.304502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.304743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.304762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.309628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.309885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.309905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.314405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.314666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.314685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.319023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.319289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.319309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.324371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.324614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.324632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.329108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.329350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.329369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.333706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.333955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.333979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.338190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.338430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.338450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.342506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 
08:09:45.342748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.342767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.346976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.347226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.347245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.351309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.351548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.351567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.356088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.255 [2024-11-27 08:09:45.356367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.255 [2024-11-27 08:09:45.356386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.255 [2024-11-27 08:09:45.361593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.361882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.361908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.366954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.367207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.367227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.371860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.372099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.372119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.376498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with 
pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.376741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.376761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.381136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.381378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.381398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.385781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.386068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.386087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.390514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.390764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.390783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.394946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.395216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.395235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.399470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.399727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.399747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.515 6191.00 IOPS, 773.88 MiB/s [2024-11-27T07:09:45.624Z] [2024-11-27 08:09:45.405721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.405995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.406013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.410154] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.410403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.410422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.414811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.415053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.415072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.419754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.420011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.420031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.425226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.425470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.425490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.430259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.430503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.515 [2024-11-27 08:09:45.430522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.515 [2024-11-27 08:09:45.434989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.515 [2024-11-27 08:09:45.435225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.435244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.440433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.440692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.440711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.516 
[2024-11-27 08:09:45.446210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.446443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.446462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.451008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.451271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.451290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.455593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.455839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.455858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.460179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.460424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.460448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.465097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.465338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.465357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.470147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.470407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.470426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.475104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.475365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.475384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.479936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.480216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.480235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.484695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.484962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.484981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.489736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.490014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.490033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.494417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.494656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.494674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.499139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.499395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.499414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.503774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.504045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.504064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.508665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.508908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.508927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.513435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.513700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.513720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.518229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.518484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.518503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.523327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.523575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.523594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.527989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.528228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.528247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.533034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.533308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.533328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.538259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.538501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.538520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.516 [2024-11-27 08:09:45.543076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:51.516 [2024-11-27 08:09:45.543314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.516 [2024-11-27 08:09:45.543333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:51.516 [2024-11-27 08:09:45.547326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8
00:26:51.517 [2024-11-27 08:09:45.547571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:51.517 [2024-11-27 08:09:45.547590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-entry pattern (tcp.c:2233:data_crc32_calc_done *ERROR*: Data digest error, nvme_io_qpair_print_command *NOTICE*: WRITE, spdk_nvme_print_completion *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens of further WRITE commands on tqpair=(0x145e660), pdu=0x200016eff3c8; only the timestamps, lba, and sqhd values change ...]
00:26:52.301 [2024-11-27 08:09:46.228324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8
00:26:52.301 [2024-11-27 08:09:46.228566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.301 [2024-11-27 08:09:46.228585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:52.301 [2024-11-27 08:09:46.232750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8
00:26:52.301 [2024-11-27 08:09:46.232996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.301 [2024-11-27 08:09:46.233015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.301 [2024-11-27 08:09:46.237149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.301 [2024-11-27 08:09:46.237392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.301 [2024-11-27 08:09:46.237411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.301 [2024-11-27 08:09:46.241542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.301 [2024-11-27 08:09:46.241794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.301 [2024-11-27 08:09:46.241812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.301 [2024-11-27 08:09:46.245999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.301 [2024-11-27 08:09:46.246247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.301 [2024-11-27 08:09:46.246266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.301 [2024-11-27 08:09:46.250447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.301 [2024-11-27 08:09:46.250692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.301 [2024-11-27 08:09:46.250711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.301 [2024-11-27 08:09:46.254868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.301 [2024-11-27 08:09:46.255131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.301 [2024-11-27 08:09:46.255150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.301 [2024-11-27 08:09:46.259313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.301 [2024-11-27 08:09:46.259576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.301 [2024-11-27 08:09:46.259596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.301 [2024-11-27 08:09:46.263750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.301 [2024-11-27 08:09:46.264016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.301 [2024-11-27 08:09:46.264035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.268146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.268418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.268437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.272622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.272887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.272906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.277157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.277396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.277415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.281852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.282116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.282135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.287086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.287363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.287382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.292877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.293127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.293147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.298066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 
08:09:46.298313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.298332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.302848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.303113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.303132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.307686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.307945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.307970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.312293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.312533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.312552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.316840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.317094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.317113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.321613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.321855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.321877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.326885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.327149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.327170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.331979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with 
pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.332232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.332251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.337360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.337594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.337613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.343147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.343396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.343415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.348604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.348847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.348868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.353488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.353739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.353758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.357994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.358236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.358254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.362549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.362788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.362807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.367871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.368144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.368164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.372344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.372605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.372624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.376641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.376884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.376903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.380981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.381232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.381251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.385346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.385591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.385610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.389772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.390039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.390058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.302 [2024-11-27 08:09:46.394177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.302 [2024-11-27 08:09:46.394423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.302 [2024-11-27 08:09:46.394443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.303 [2024-11-27 08:09:46.398545] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.303 [2024-11-27 08:09:46.398792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.303 [2024-11-27 08:09:46.398811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.303 [2024-11-27 08:09:46.402879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.303 [2024-11-27 08:09:46.403155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.303 [2024-11-27 08:09:46.403174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.303 6365.50 IOPS, 795.69 MiB/s [2024-11-27T07:09:46.412Z] [2024-11-27 08:09:46.408369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145e660) with pdu=0x200016eff3c8 00:26:52.560 [2024-11-27 08:09:46.408574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.560 [2024-11-27 08:09:46.408596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.560 00:26:52.560 Latency(us) 00:26:52.560 [2024-11-27T07:09:46.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.560 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:52.560 nvme0n1 : 2.00 6363.66 795.46 0.00 0.00 2509.90 1759.50 7750.34 00:26:52.560 [2024-11-27T07:09:46.669Z] =================================================================================================================== 00:26:52.560 [2024-11-27T07:09:46.669Z] Total : 6363.66 795.46 0.00 0.00 2509.90 1759.50 7750.34 00:26:52.560 { 00:26:52.560 "results": [ 00:26:52.560 { 00:26:52.560 "job": "nvme0n1", 00:26:52.560 "core_mask": "0x2", 00:26:52.560 "workload": "randwrite", 00:26:52.560 "status": "finished", 00:26:52.560 "queue_depth": 16, 00:26:52.560 "io_size": 131072, 00:26:52.560 "runtime": 2.003878, 00:26:52.560 "iops": 6363.660861589378, 00:26:52.560 "mibps": 795.4576076986723, 00:26:52.560 "io_failed": 0, 00:26:52.560 "io_timeout": 0, 00:26:52.560 "avg_latency_us": 2509.895447056898, 00:26:52.560 "min_latency_us": 1759.4991304347825, 00:26:52.560 "max_latency_us": 7750.344347826087 00:26:52.560 } 00:26:52.560 ], 00:26:52.560 "core_count": 1 00:26:52.560 } 00:26:52.560 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:52.560 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:52.560 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:52.560 | .driver_specific 00:26:52.560 | .nvme_error 00:26:52.560 | .status_code 00:26:52.561 | .command_transient_transport_error' 00:26:52.561 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:52.561 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # (( 412 > 0 )) 00:26:52.561 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2597218 00:26:52.561 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2597218 ']' 00:26:52.561 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2597218 00:26:52.561 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:52.561 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.561 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2597218 00:26:52.818 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2597218' 00:26:52.819 killing process with pid 2597218 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2597218 00:26:52.819 Received shutdown signal, test time was about 2.000000 seconds 00:26:52.819 00:26:52.819 Latency(us) 00:26:52.819 [2024-11-27T07:09:46.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.819 [2024-11-27T07:09:46.928Z] =================================================================================================================== 00:26:52.819 [2024-11-27T07:09:46.928Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2597218 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2595555 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2595555 ']' 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2595555 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2595555 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2595555' 00:26:52.819 killing process with pid 2595555 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2595555 00:26:52.819 08:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2595555 00:26:53.077 00:26:53.077 real 0m13.869s 00:26:53.077 user 0m26.616s 00:26:53.077 sys 0m4.295s 00:26:53.077 08:09:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.077 ************************************ 00:26:53.077 END TEST nvmf_digest_error 00:26:53.077 ************************************ 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.077 rmmod nvme_tcp 00:26:53.077 rmmod nvme_fabrics 00:26:53.077 rmmod nvme_keyring 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2595555 ']' 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2595555 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2595555 ']' 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2595555 00:26:53.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2595555) - No such process 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2595555 is not found' 00:26:53.077 Process with pid 2595555 is not found 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.077 08:09:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:55.610 00:26:55.610 real 0m35.952s 00:26:55.610 user 0m55.222s 00:26:55.610 sys 0m13.025s 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:55.610 ************************************ 00:26:55.610 END TEST nvmf_digest 00:26:55.610 ************************************ 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.610 ************************************ 00:26:55.610 START TEST nvmf_bdevperf 00:26:55.610 ************************************ 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:55.610 * Looking for test storage... 00:26:55.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:55.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.610 --rc genhtml_branch_coverage=1 00:26:55.610 --rc genhtml_function_coverage=1 00:26:55.610 --rc genhtml_legend=1 00:26:55.610 --rc geninfo_all_blocks=1 00:26:55.610 --rc geninfo_unexecuted_blocks=1 00:26:55.610 00:26:55.610 ' 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:55.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.610 --rc genhtml_branch_coverage=1 00:26:55.610 --rc genhtml_function_coverage=1 00:26:55.610 --rc genhtml_legend=1 00:26:55.610 --rc geninfo_all_blocks=1 00:26:55.610 --rc geninfo_unexecuted_blocks=1 00:26:55.610 00:26:55.610 ' 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:55.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.610 --rc genhtml_branch_coverage=1 00:26:55.610 --rc genhtml_function_coverage=1 00:26:55.610 --rc genhtml_legend=1 00:26:55.610 --rc geninfo_all_blocks=1 00:26:55.610 --rc geninfo_unexecuted_blocks=1 00:26:55.610 00:26:55.610 ' 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:55.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.610 --rc genhtml_branch_coverage=1 00:26:55.610 --rc genhtml_function_coverage=1 00:26:55.610 --rc genhtml_legend=1 00:26:55.610 --rc geninfo_all_blocks=1 00:26:55.610 --rc geninfo_unexecuted_blocks=1 00:26:55.610 00:26:55.610 ' 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.610 
08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.610 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.611 08:09:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.877 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.877 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:00.877 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:00.877 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:00.877 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:00.877 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:00.877 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:00.878 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:00.878 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:00.878 Found net devices under 0000:86:00.0: cvl_0_0 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:00.878 Found net devices under 0000:86:00.1: cvl_0_1 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:00.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:27:00.878 00:27:00.878 --- 10.0.0.2 ping statistics --- 00:27:00.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.878 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:00.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:27:00.878 00:27:00.878 --- 10.0.0.1 ping statistics --- 00:27:00.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.878 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:00.878 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2601223 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2601223 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2601223 ']' 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.879 [2024-11-27 08:09:54.502333] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
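nvmf_tcp_init, traced above, splits the two cvl interfaces across a network namespace: cvl_0_0 becomes the target side at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the host as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and the two pings confirm the path before nvmf_tgt is started inside the namespace. A condensed manual equivalent, using the interface names and addresses from this run:

  # Topology built by nvmf_tcp_init (sketch; names and IPs as in this run)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target (namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1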
00:27:00.879 [2024-11-27 08:09:54.502377] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.879 [2024-11-27 08:09:54.566550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:00.879 [2024-11-27 08:09:54.609159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.879 [2024-11-27 08:09:54.609197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.879 [2024-11-27 08:09:54.609204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.879 [2024-11-27 08:09:54.609210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.879 [2024-11-27 08:09:54.609215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.879 [2024-11-27 08:09:54.610674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.879 [2024-11-27 08:09:54.610779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.879 [2024-11-27 08:09:54.610781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.879 [2024-11-27 08:09:54.752971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.879 Malloc0 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
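nvmfappstart has just launched nvmf_tgt inside the namespace with -m 0xE (reactors on cores 1, 2 and 3), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock before the rpc_cmd provisioning runs. A hedged sketch of that startup-and-wait step; using rpc_get_methods as the readiness probe is an assumption here, the harness itself checks the socket:

  # Start the target in the namespace and wait for its RPC socket (sketch).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done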
00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.879 [2024-11-27 08:09:54.806684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:00.879 { 00:27:00.879 "params": { 00:27:00.879 "name": "Nvme$subsystem", 00:27:00.879 "trtype": "$TEST_TRANSPORT", 00:27:00.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.879 "adrfam": "ipv4", 00:27:00.879 "trsvcid": "$NVMF_PORT", 00:27:00.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.879 "hdgst": ${hdgst:-false}, 00:27:00.879 "ddgst": ${ddgst:-false} 00:27:00.879 }, 00:27:00.879 "method": "bdev_nvme_attach_controller" 00:27:00.879 } 00:27:00.879 EOF 00:27:00.879 )") 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:00.879 08:09:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:00.879 "params": { 00:27:00.879 "name": "Nvme1", 00:27:00.879 "trtype": "tcp", 00:27:00.879 "traddr": "10.0.0.2", 00:27:00.879 "adrfam": "ipv4", 00:27:00.879 "trsvcid": "4420", 00:27:00.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:00.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:00.879 "hdgst": false, 00:27:00.879 "ddgst": false 00:27:00.879 }, 00:27:00.879 "method": "bdev_nvme_attach_controller" 00:27:00.879 }' 00:27:00.879 [2024-11-27 08:09:54.858234] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
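Taken together, the rpc_cmd calls traced above provision the target: a TCP transport (-o -u 8192), a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with any-host access and serial SPDK00000000000001, the Malloc0 namespace, and a listener on 10.0.0.2:4420. The same sequence issued directly with scripts/rpc.py, arguments copied from this run:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420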
00:27:00.879 [2024-11-27 08:09:54.858277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601250 ] 00:27:00.879 [2024-11-27 08:09:54.920813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.879 [2024-11-27 08:09:54.963058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.447 Running I/O for 1 seconds... 00:27:02.450 10785.00 IOPS, 42.13 MiB/s 00:27:02.450 Latency(us) 00:27:02.450 [2024-11-27T07:09:56.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.450 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:02.450 Verification LBA range: start 0x0 length 0x4000 00:27:02.450 Nvme1n1 : 1.01 10804.03 42.20 0.00 0.00 11802.23 2251.02 13506.11 00:27:02.450 [2024-11-27T07:09:56.559Z] =================================================================================================================== 00:27:02.450 [2024-11-27T07:09:56.559Z] Total : 10804.03 42.20 0.00 0.00 11802.23 2251.02 13506.11 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2601484 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.450 { 00:27:02.450 "params": { 00:27:02.450 "name": "Nvme$subsystem", 00:27:02.450 "trtype": "$TEST_TRANSPORT", 00:27:02.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.450 "adrfam": "ipv4", 00:27:02.450 "trsvcid": "$NVMF_PORT", 00:27:02.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.450 "hdgst": ${hdgst:-false}, 00:27:02.450 "ddgst": ${ddgst:-false} 00:27:02.450 }, 00:27:02.450 "method": "bdev_nvme_attach_controller" 00:27:02.450 } 00:27:02.450 EOF 00:27:02.450 )") 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
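The first bdevperf pass above (-q 128 -o 4096 -w verify -t 1) reports roughly 10.8K IOPS on Nvme1n1. bdevperf is not driven over RPC here; it takes a bdev_nvme_attach_controller entry from the JSON that gen_nvmf_target_json emits over /dev/fd/62 (and /dev/fd/63 for the second run started above). A sketch of an equivalent direct invocation: the params block is copied from the printed config, while the outer "subsystems"/"bdev" wrapper is an assumption, since the log only shows the inner fragment:

  ./build/examples/bdevperf -q 128 -o 4096 -w verify -t 1 --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  )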
00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:02.450 08:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:02.450 "params": { 00:27:02.450 "name": "Nvme1", 00:27:02.450 "trtype": "tcp", 00:27:02.450 "traddr": "10.0.0.2", 00:27:02.450 "adrfam": "ipv4", 00:27:02.450 "trsvcid": "4420", 00:27:02.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:02.450 "hdgst": false, 00:27:02.450 "ddgst": false 00:27:02.450 }, 00:27:02.450 "method": "bdev_nvme_attach_controller" 00:27:02.450 }' 00:27:02.450 [2024-11-27 08:09:56.504154] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:27:02.451 [2024-11-27 08:09:56.504203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601484 ] 00:27:02.709 [2024-11-27 08:09:56.566763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.709 [2024-11-27 08:09:56.606353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.709 Running I/O for 15 seconds... 00:27:05.019 10809.00 IOPS, 42.22 MiB/s [2024-11-27T07:09:59.696Z] 10821.50 IOPS, 42.27 MiB/s [2024-11-27T07:09:59.696Z] 08:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2601223 00:27:05.587 08:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:05.587 [2024-11-27 08:09:59.472194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 
08:09:59.472337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.587 [2024-11-27 08:09:59.472705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-11-27 08:09:59.472713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.472935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.472943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:05.588 [2024-11-27 08:09:59.473111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473268] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.588 [2024-11-27 08:09:59.473290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:72 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100264 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-11-27 08:09:59.473839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-11-27 08:09:59.473847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.473855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.473864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.473870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.473879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.473885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.473893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:05.589 [2024-11-27 08:09:59.473901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.473909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.473916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.473925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.473931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.473939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.473946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.473959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.473966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.473974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.473985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.473994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474060] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.589 [2024-11-27 08:09:59.474278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.589 [2024-11-27 08:09:59.474293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.589 [2024-11-27 08:09:59.474309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.589 [2024-11-27 08:09:59.474324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.589 [2024-11-27 08:09:59.474340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.589 [2024-11-27 08:09:59.474355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.589 [2024-11-27 08:09:59.474372] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.474380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8296c0 is same with the state(6) to be set 00:27:05.589 [2024-11-27 08:09:59.474389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.589 [2024-11-27 08:09:59.474394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.589 [2024-11-27 08:09:59.474402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100032 len:8 PRP1 0x0 PRP2 0x0 00:27:05.589 [2024-11-27 08:09:59.474409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.589 [2024-11-27 08:09:59.477313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.589 [2024-11-27 08:09:59.477371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.589 [2024-11-27 08:09:59.477984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-27 08:09:59.478001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.589 [2024-11-27 08:09:59.478010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.589 [2024-11-27 08:09:59.478191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.590 [2024-11-27 08:09:59.478371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.590 [2024-11-27 08:09:59.478380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.590 [2024-11-27 08:09:59.478388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.590 [2024-11-27 08:09:59.478397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
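The long dump above is the I/O qpair being drained while the controller resets: every queued WRITE and READ is completed with status (00/08), which reads as Status Code Type 0x0 (Generic Command Status) and Status Code 0x08, "Command Aborted due to SQ Deletion" in the NVMe base specification. A minimal decode sketch follows, assuming the standard completion-status layout (phase tag in bit 0, SC in bits 8:1, SCT in bits 11:9 of the 16-bit status word); it is an illustration only, not SPDK code.

    /* Sketch: unpacking the "(SCT/SC)" pair that the completion trace prints
     * as (00/08).  Assumes the standard NVMe CQE status-word layout. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t status = (0x0 << 9) | (0x08 << 1);   /* SCT=0x0, SC=0x08, phase=0 */

        uint8_t sc  = (status >> 1) & 0xff;           /* Status Code      */
        uint8_t sct = (status >> 9) & 0x07;           /* Status Code Type */
        printf("(%02x/%02x) -> %s\n", sct, sc,
               (sct == 0x0 && sc == 0x08) ? "Command Aborted due to SQ Deletion"
                                          : "other status");
        return 0;
    }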
00:27:05.590 [2024-11-27 08:09:59.490526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.590 [2024-11-27 08:09:59.490913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-27 08:09:59.490931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.590 [2024-11-27 08:09:59.490939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.590 [2024-11-27 08:09:59.491118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.590 [2024-11-27 08:09:59.491293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.590 [2024-11-27 08:09:59.491301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.590 [2024-11-27 08:09:59.491308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.590 [2024-11-27 08:09:59.491314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.590 [2024-11-27 08:09:59.503566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.590 [2024-11-27 08:09:59.503891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-27 08:09:59.503908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.590 [2024-11-27 08:09:59.503915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.590 [2024-11-27 08:09:59.504095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.590 [2024-11-27 08:09:59.504273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.590 [2024-11-27 08:09:59.504282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.590 [2024-11-27 08:09:59.504288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.590 [2024-11-27 08:09:59.504294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
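From here on the same retry repeats: posix_sock_create reports "connect() failed, errno = 111", which on Linux is ECONNREFUSED, meaning nothing is accepting on 10.0.0.2 port 4420 while the target side is down; the unusable socket then surfaces as "(9): Bad file descriptor" when the qpair is flushed, and the reset attempt is marked failed. A standalone sketch of the same symptom, using plain POSIX sockets with the address and port taken from the log (illustration only, not the SPDK TCP transport code):

    /* Sketch: what "connect() failed, errno = 111" corresponds to.
     * 111 is ECONNREFUSED on Linux: no listener on the address/port. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log   */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log  */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }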
00:27:05.590 [2024-11-27 08:09:59.516565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.590 [2024-11-27 08:09:59.516979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-27 08:09:59.516997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.590 [2024-11-27 08:09:59.517005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.590 [2024-11-27 08:09:59.517179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.590 [2024-11-27 08:09:59.517352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.590 [2024-11-27 08:09:59.517362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.590 [2024-11-27 08:09:59.517368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.590 [2024-11-27 08:09:59.517375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.590 [2024-11-27 08:09:59.529522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.590 [2024-11-27 08:09:59.529969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-27 08:09:59.529986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.590 [2024-11-27 08:09:59.529993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.590 [2024-11-27 08:09:59.530173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.590 [2024-11-27 08:09:59.530351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.590 [2024-11-27 08:09:59.530360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.590 [2024-11-27 08:09:59.530366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.590 [2024-11-27 08:09:59.530373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.590 [2024-11-27 08:09:59.542368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.590 [2024-11-27 08:09:59.542835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-27 08:09:59.542852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.590 [2024-11-27 08:09:59.542860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.590 [2024-11-27 08:09:59.543041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.590 [2024-11-27 08:09:59.543221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.590 [2024-11-27 08:09:59.543229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.590 [2024-11-27 08:09:59.543239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.590 [2024-11-27 08:09:59.543246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.590 [2024-11-27 08:09:59.555244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.590 [2024-11-27 08:09:59.555647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-27 08:09:59.555691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.590 [2024-11-27 08:09:59.555713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.590 [2024-11-27 08:09:59.556175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.590 [2024-11-27 08:09:59.556349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.590 [2024-11-27 08:09:59.556357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.590 [2024-11-27 08:09:59.556363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.590 [2024-11-27 08:09:59.556369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.590 [2024-11-27 08:09:59.568192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.590 [2024-11-27 08:09:59.568638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-27 08:09:59.568654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.590 [2024-11-27 08:09:59.568661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.590 [2024-11-27 08:09:59.568834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.590 [2024-11-27 08:09:59.569014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.590 [2024-11-27 08:09:59.569024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.590 [2024-11-27 08:09:59.569030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.590 [2024-11-27 08:09:59.569036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.590 [2024-11-27 08:09:59.581190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.590 [2024-11-27 08:09:59.581641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-27 08:09:59.581658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.590 [2024-11-27 08:09:59.581665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.590 [2024-11-27 08:09:59.581837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.590 [2024-11-27 08:09:59.582016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.590 [2024-11-27 08:09:59.582025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.590 [2024-11-27 08:09:59.582031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.590 [2024-11-27 08:09:59.582037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.590 [2024-11-27 08:09:59.594099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.590 [2024-11-27 08:09:59.594462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-27 08:09:59.594479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.590 [2024-11-27 08:09:59.594486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.590 [2024-11-27 08:09:59.594658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.590 [2024-11-27 08:09:59.594831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.590 [2024-11-27 08:09:59.594839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.590 [2024-11-27 08:09:59.594846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.590 [2024-11-27 08:09:59.594852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.590 [2024-11-27 08:09:59.607107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.590 [2024-11-27 08:09:59.607454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-27 08:09:59.607471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.591 [2024-11-27 08:09:59.607478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.591 [2024-11-27 08:09:59.607651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.591 [2024-11-27 08:09:59.607823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.591 [2024-11-27 08:09:59.607831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.591 [2024-11-27 08:09:59.607837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.591 [2024-11-27 08:09:59.607843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.591 [2024-11-27 08:09:59.620137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.591 [2024-11-27 08:09:59.620436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-27 08:09:59.620453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.591 [2024-11-27 08:09:59.620460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.591 [2024-11-27 08:09:59.620631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.591 [2024-11-27 08:09:59.620808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.591 [2024-11-27 08:09:59.620817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.591 [2024-11-27 08:09:59.620823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.591 [2024-11-27 08:09:59.620829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.591 [2024-11-27 08:09:59.633113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.591 [2024-11-27 08:09:59.633543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-27 08:09:59.633579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.591 [2024-11-27 08:09:59.633611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.591 [2024-11-27 08:09:59.634205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.591 [2024-11-27 08:09:59.634380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.591 [2024-11-27 08:09:59.634388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.591 [2024-11-27 08:09:59.634394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.591 [2024-11-27 08:09:59.634400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.591 [2024-11-27 08:09:59.646035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.591 [2024-11-27 08:09:59.646444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-27 08:09:59.646461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.591 [2024-11-27 08:09:59.646468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.591 [2024-11-27 08:09:59.646646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.591 [2024-11-27 08:09:59.646823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.591 [2024-11-27 08:09:59.646831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.591 [2024-11-27 08:09:59.646838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.591 [2024-11-27 08:09:59.646844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.591 [2024-11-27 08:09:59.658877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.591 [2024-11-27 08:09:59.659265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-27 08:09:59.659282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.591 [2024-11-27 08:09:59.659289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.591 [2024-11-27 08:09:59.659463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.591 [2024-11-27 08:09:59.659635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.591 [2024-11-27 08:09:59.659644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.591 [2024-11-27 08:09:59.659650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.591 [2024-11-27 08:09:59.659656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.591 [2024-11-27 08:09:59.671889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.591 [2024-11-27 08:09:59.672194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-27 08:09:59.672211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.591 [2024-11-27 08:09:59.672218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.591 [2024-11-27 08:09:59.672391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.591 [2024-11-27 08:09:59.672567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.591 [2024-11-27 08:09:59.672576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.591 [2024-11-27 08:09:59.672582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.591 [2024-11-27 08:09:59.672588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.591 [2024-11-27 08:09:59.684865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.591 [2024-11-27 08:09:59.685239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-27 08:09:59.685283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.591 [2024-11-27 08:09:59.685305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.591 [2024-11-27 08:09:59.685887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.591 [2024-11-27 08:09:59.686444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.591 [2024-11-27 08:09:59.686453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.591 [2024-11-27 08:09:59.686459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.591 [2024-11-27 08:09:59.686465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.851 [2024-11-27 08:09:59.697959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.851 [2024-11-27 08:09:59.698349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.851 [2024-11-27 08:09:59.698366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.851 [2024-11-27 08:09:59.698374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.851 [2024-11-27 08:09:59.698547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.851 [2024-11-27 08:09:59.698721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.851 [2024-11-27 08:09:59.698729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.851 [2024-11-27 08:09:59.698735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.851 [2024-11-27 08:09:59.698741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.851 [2024-11-27 08:09:59.711052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.851 [2024-11-27 08:09:59.711396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.851 [2024-11-27 08:09:59.711412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.851 [2024-11-27 08:09:59.711419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.851 [2024-11-27 08:09:59.711591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.851 [2024-11-27 08:09:59.711765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.851 [2024-11-27 08:09:59.711774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.851 [2024-11-27 08:09:59.711784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.851 [2024-11-27 08:09:59.711791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.851 [2024-11-27 08:09:59.724161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.851 [2024-11-27 08:09:59.724554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.851 [2024-11-27 08:09:59.724569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.851 [2024-11-27 08:09:59.724577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.851 [2024-11-27 08:09:59.724749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.851 [2024-11-27 08:09:59.724922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.851 [2024-11-27 08:09:59.724929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.851 [2024-11-27 08:09:59.724935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.851 [2024-11-27 08:09:59.724941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.851 [2024-11-27 08:09:59.737255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.852 [2024-11-27 08:09:59.737609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.852 [2024-11-27 08:09:59.737626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.852 [2024-11-27 08:09:59.737633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.852 [2024-11-27 08:09:59.737812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.852 [2024-11-27 08:09:59.737997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.852 [2024-11-27 08:09:59.738007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.852 [2024-11-27 08:09:59.738016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.852 [2024-11-27 08:09:59.738022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.852 [2024-11-27 08:09:59.750419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.852 [2024-11-27 08:09:59.750727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.852 [2024-11-27 08:09:59.750745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.852 [2024-11-27 08:09:59.750752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.852 [2024-11-27 08:09:59.750929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.852 [2024-11-27 08:09:59.751115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.852 [2024-11-27 08:09:59.751124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.852 [2024-11-27 08:09:59.751131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.852 [2024-11-27 08:09:59.751137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.852 [2024-11-27 08:09:59.763522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.852 [2024-11-27 08:09:59.763897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.852 [2024-11-27 08:09:59.763912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.852 [2024-11-27 08:09:59.763919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.852 [2024-11-27 08:09:59.764102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.852 [2024-11-27 08:09:59.764289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.852 [2024-11-27 08:09:59.764297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.852 [2024-11-27 08:09:59.764303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.852 [2024-11-27 08:09:59.764309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.852 [2024-11-27 08:09:59.776477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.852 [2024-11-27 08:09:59.776859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.852 [2024-11-27 08:09:59.776875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.852 [2024-11-27 08:09:59.776882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.852 [2024-11-27 08:09:59.777060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.852 [2024-11-27 08:09:59.777234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.852 [2024-11-27 08:09:59.777242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.852 [2024-11-27 08:09:59.777248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.852 [2024-11-27 08:09:59.777254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.852 [2024-11-27 08:09:59.789449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.852 [2024-11-27 08:09:59.789852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.852 [2024-11-27 08:09:59.789869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.852 [2024-11-27 08:09:59.789876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.852 [2024-11-27 08:09:59.790054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.852 [2024-11-27 08:09:59.790229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.852 [2024-11-27 08:09:59.790237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.852 [2024-11-27 08:09:59.790243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.852 [2024-11-27 08:09:59.790249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.852 [2024-11-27 08:09:59.802358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.852 [2024-11-27 08:09:59.802782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.852 [2024-11-27 08:09:59.802797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.852 [2024-11-27 08:09:59.802808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.852 [2024-11-27 08:09:59.802987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.852 [2024-11-27 08:09:59.803161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.852 [2024-11-27 08:09:59.803169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.852 [2024-11-27 08:09:59.803175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.852 [2024-11-27 08:09:59.803182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.852 [2024-11-27 08:09:59.815344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.852 [2024-11-27 08:09:59.815647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.852 [2024-11-27 08:09:59.815664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.852 [2024-11-27 08:09:59.815670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.852 [2024-11-27 08:09:59.815842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.852 [2024-11-27 08:09:59.816025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.852 [2024-11-27 08:09:59.816034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.852 [2024-11-27 08:09:59.816040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.852 [2024-11-27 08:09:59.816046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.852 9608.00 IOPS, 37.53 MiB/s [2024-11-27T07:09:59.961Z] [2024-11-27 08:09:59.828326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.852 [2024-11-27 08:09:59.828623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.852 [2024-11-27 08:09:59.828640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.852 [2024-11-27 08:09:59.828647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.852 [2024-11-27 08:09:59.828819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.852 [2024-11-27 08:09:59.829000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.852 [2024-11-27 08:09:59.829009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.852 [2024-11-27 08:09:59.829033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.852 [2024-11-27 08:09:59.829040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.852 [2024-11-27 08:09:59.841288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.852 [2024-11-27 08:09:59.841578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.852 [2024-11-27 08:09:59.841594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.852 [2024-11-27 08:09:59.841601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.852 [2024-11-27 08:09:59.841773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.852 [2024-11-27 08:09:59.841957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.852 [2024-11-27 08:09:59.841966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.852 [2024-11-27 08:09:59.841973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.852 [2024-11-27 08:09:59.841979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
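The bandwidth sample interleaved just above (9608.00 IOPS, 37.53 MiB/s) is consistent with the command trace: the aborted WRITEs and READs all carry len:8, and with an assumed 512-byte logical block (the block size is not stated in this excerpt) that is 4 KiB per I/O, so 9608 x 4 KiB works out to roughly 37.53 MiB/s. A quick cross-check:

    /* Sketch: sanity-checking the "9608.00 IOPS, 37.53 MiB/s" sample
     * against the len:8 commands, assuming 512-byte logical blocks. */
    #include <stdio.h>

    int main(void)
    {
        const double iops     = 9608.0;       /* from the log sample        */
        const double io_bytes = 8 * 512.0;    /* len:8 * assumed 512 B      */
        printf("%.2f MiB/s\n", iops * io_bytes / (1024.0 * 1024.0));  /* 37.53 */
        return 0;
    }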
00:27:05.852 [2024-11-27 08:09:59.854208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.852 [2024-11-27 08:09:59.854548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.852 [2024-11-27 08:09:59.854565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.852 [2024-11-27 08:09:59.854572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.852 [2024-11-27 08:09:59.854745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.852 [2024-11-27 08:09:59.854918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.852 [2024-11-27 08:09:59.854927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.852 [2024-11-27 08:09:59.854934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.852 [2024-11-27 08:09:59.854940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.853 [2024-11-27 08:09:59.867138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.853 [2024-11-27 08:09:59.867429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.853 [2024-11-27 08:09:59.867446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.853 [2024-11-27 08:09:59.867454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.853 [2024-11-27 08:09:59.867626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.853 [2024-11-27 08:09:59.867800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.853 [2024-11-27 08:09:59.867808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.853 [2024-11-27 08:09:59.867815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.853 [2024-11-27 08:09:59.867821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.853 [2024-11-27 08:09:59.880205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.853 [2024-11-27 08:09:59.880498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.853 [2024-11-27 08:09:59.880515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.853 [2024-11-27 08:09:59.880522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.853 [2024-11-27 08:09:59.880695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.853 [2024-11-27 08:09:59.880869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.853 [2024-11-27 08:09:59.880878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.853 [2024-11-27 08:09:59.880887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.853 [2024-11-27 08:09:59.880894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.853 [2024-11-27 08:09:59.893331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.853 [2024-11-27 08:09:59.893744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.853 [2024-11-27 08:09:59.893760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.853 [2024-11-27 08:09:59.893767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.853 [2024-11-27 08:09:59.893946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.853 [2024-11-27 08:09:59.894133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.853 [2024-11-27 08:09:59.894141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.853 [2024-11-27 08:09:59.894148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.853 [2024-11-27 08:09:59.894155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.853 [2024-11-27 08:09:59.906385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.853 [2024-11-27 08:09:59.906835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.853 [2024-11-27 08:09:59.906879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.853 [2024-11-27 08:09:59.906901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.853 [2024-11-27 08:09:59.907497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.853 [2024-11-27 08:09:59.908075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.853 [2024-11-27 08:09:59.908083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.853 [2024-11-27 08:09:59.908090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.853 [2024-11-27 08:09:59.908096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.853 [2024-11-27 08:09:59.919498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.853 [2024-11-27 08:09:59.919978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.853 [2024-11-27 08:09:59.920024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.853 [2024-11-27 08:09:59.920047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.853 [2024-11-27 08:09:59.920631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.853 [2024-11-27 08:09:59.921144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.853 [2024-11-27 08:09:59.921153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.853 [2024-11-27 08:09:59.921159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.853 [2024-11-27 08:09:59.921166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.853 [2024-11-27 08:09:59.932392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.853 [2024-11-27 08:09:59.932757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.853 [2024-11-27 08:09:59.932773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.853 [2024-11-27 08:09:59.932780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.853 [2024-11-27 08:09:59.932958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.853 [2024-11-27 08:09:59.933132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.853 [2024-11-27 08:09:59.933140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.853 [2024-11-27 08:09:59.933146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.853 [2024-11-27 08:09:59.933152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.853 [2024-11-27 08:09:59.945282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.853 [2024-11-27 08:09:59.945691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.853 [2024-11-27 08:09:59.945734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:05.853 [2024-11-27 08:09:59.945757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:05.853 [2024-11-27 08:09:59.946276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:05.853 [2024-11-27 08:09:59.946450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.853 [2024-11-27 08:09:59.946458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.853 [2024-11-27 08:09:59.946464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.853 [2024-11-27 08:09:59.946470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.853 [2024-11-27 08:09:59.958457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.853 [2024-11-27 08:09:59.958851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:09:59.958868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:09:59.958876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:09:59.959061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:09:59.959244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:09:59.959252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:09:59.959260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:09:59.959266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.113 [2024-11-27 08:09:59.971491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:09:59.971940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:09:59.972000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:09:59.972037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:09:59.972416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:09:59.972590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:09:59.972598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:09:59.972605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:09:59.972611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.113 [2024-11-27 08:09:59.984369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:09:59.984818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:09:59.984835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:09:59.984842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:09:59.985036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:09:59.985215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:09:59.985224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:09:59.985230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:09:59.985236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.113 [2024-11-27 08:09:59.997508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:09:59.997985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:09:59.998032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:09:59.998056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:09:59.998638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:09:59.999200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:09:59.999209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:09:59.999215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:09:59.999222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.113 [2024-11-27 08:10:00.010630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:10:00.011037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:10:00.011055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:10:00.011062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:10:00.011241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:10:00.011424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:10:00.011432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:10:00.011439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:10:00.011446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.113 [2024-11-27 08:10:00.023724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:10:00.024085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:10:00.024103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:10:00.024112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:10:00.024291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:10:00.024471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:10:00.024479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:10:00.024486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:10:00.024493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.113 [2024-11-27 08:10:00.036869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:10:00.037223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:10:00.037241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:10:00.037250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:10:00.037429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:10:00.037608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:10:00.037617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:10:00.037625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:10:00.037632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.113 [2024-11-27 08:10:00.050063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:10:00.050491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:10:00.050508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:10:00.050515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:10:00.050693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:10:00.050873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:10:00.050882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:10:00.050889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:10:00.050899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.113 [2024-11-27 08:10:00.063209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:10:00.063579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:10:00.063596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:10:00.063603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:10:00.063783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:10:00.063967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:10:00.063976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:10:00.063983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:10:00.063989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.113 [2024-11-27 08:10:00.076383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:10:00.076735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:10:00.076752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:10:00.076760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:10:00.076939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:10:00.077124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:10:00.077133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:10:00.077140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:10:00.077146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.113 [2024-11-27 08:10:00.089529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:10:00.089971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:10:00.089988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:10:00.089996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.113 [2024-11-27 08:10:00.090175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.113 [2024-11-27 08:10:00.090354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.113 [2024-11-27 08:10:00.090363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.113 [2024-11-27 08:10:00.090371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.113 [2024-11-27 08:10:00.090377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.113 [2024-11-27 08:10:00.102576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.113 [2024-11-27 08:10:00.102957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-27 08:10:00.102974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.113 [2024-11-27 08:10:00.102981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.114 [2024-11-27 08:10:00.103159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.114 [2024-11-27 08:10:00.103337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.114 [2024-11-27 08:10:00.103345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.114 [2024-11-27 08:10:00.103352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.114 [2024-11-27 08:10:00.103358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.114 [2024-11-27 08:10:00.115787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.114 [2024-11-27 08:10:00.116130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.114 [2024-11-27 08:10:00.116147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.114 [2024-11-27 08:10:00.116155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.114 [2024-11-27 08:10:00.116334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.114 [2024-11-27 08:10:00.116512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.114 [2024-11-27 08:10:00.116520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.114 [2024-11-27 08:10:00.116526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.114 [2024-11-27 08:10:00.116533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.114 [2024-11-27 08:10:00.128922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.114 [2024-11-27 08:10:00.129377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.114 [2024-11-27 08:10:00.129421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.114 [2024-11-27 08:10:00.129444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.114 [2024-11-27 08:10:00.130042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.114 [2024-11-27 08:10:00.130346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.114 [2024-11-27 08:10:00.130354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.114 [2024-11-27 08:10:00.130361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.114 [2024-11-27 08:10:00.130367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.114 [2024-11-27 08:10:00.141991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.114 [2024-11-27 08:10:00.142459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.114 [2024-11-27 08:10:00.142476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.114 [2024-11-27 08:10:00.142483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.114 [2024-11-27 08:10:00.142665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.114 [2024-11-27 08:10:00.142844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.114 [2024-11-27 08:10:00.142852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.114 [2024-11-27 08:10:00.142859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.114 [2024-11-27 08:10:00.142866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.114 [2024-11-27 08:10:00.155132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.114 [2024-11-27 08:10:00.155574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.114 [2024-11-27 08:10:00.155591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.114 [2024-11-27 08:10:00.155598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.114 [2024-11-27 08:10:00.155772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.114 [2024-11-27 08:10:00.155945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.114 [2024-11-27 08:10:00.155959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.114 [2024-11-27 08:10:00.155966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.114 [2024-11-27 08:10:00.155990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.114 [2024-11-27 08:10:00.168127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.114 [2024-11-27 08:10:00.168558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.114 [2024-11-27 08:10:00.168603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.114 [2024-11-27 08:10:00.168625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.114 [2024-11-27 08:10:00.169223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.114 [2024-11-27 08:10:00.169720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.114 [2024-11-27 08:10:00.169728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.114 [2024-11-27 08:10:00.169735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.114 [2024-11-27 08:10:00.169741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.114 [2024-11-27 08:10:00.181188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.114 [2024-11-27 08:10:00.181633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.114 [2024-11-27 08:10:00.181650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.114 [2024-11-27 08:10:00.181657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.114 [2024-11-27 08:10:00.181830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.114 [2024-11-27 08:10:00.182026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.114 [2024-11-27 08:10:00.182038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.114 [2024-11-27 08:10:00.182044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.114 [2024-11-27 08:10:00.182050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.114 [2024-11-27 08:10:00.194256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.114 [2024-11-27 08:10:00.194701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.114 [2024-11-27 08:10:00.194718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.114 [2024-11-27 08:10:00.194725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.114 [2024-11-27 08:10:00.194903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.114 [2024-11-27 08:10:00.195088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.114 [2024-11-27 08:10:00.195097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.114 [2024-11-27 08:10:00.195103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.114 [2024-11-27 08:10:00.195110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.114 [2024-11-27 08:10:00.207430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.114 [2024-11-27 08:10:00.207849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.114 [2024-11-27 08:10:00.207865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.114 [2024-11-27 08:10:00.207872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.114 [2024-11-27 08:10:00.208062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.114 [2024-11-27 08:10:00.208242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.114 [2024-11-27 08:10:00.208250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.114 [2024-11-27 08:10:00.208257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.114 [2024-11-27 08:10:00.208263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.114 [2024-11-27 08:10:00.220518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.373 [2024-11-27 08:10:00.220960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.373 [2024-11-27 08:10:00.220978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.373 [2024-11-27 08:10:00.220986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.373 [2024-11-27 08:10:00.221166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.373 [2024-11-27 08:10:00.221347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.373 [2024-11-27 08:10:00.221355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.373 [2024-11-27 08:10:00.221362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.373 [2024-11-27 08:10:00.221372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.373 [2024-11-27 08:10:00.233598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.373 [2024-11-27 08:10:00.234038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.373 [2024-11-27 08:10:00.234077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.373 [2024-11-27 08:10:00.234102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.373 [2024-11-27 08:10:00.234680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.373 [2024-11-27 08:10:00.234859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.373 [2024-11-27 08:10:00.234867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.373 [2024-11-27 08:10:00.234874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.373 [2024-11-27 08:10:00.234880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.373 [2024-11-27 08:10:00.246822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.373 [2024-11-27 08:10:00.247266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.374 [2024-11-27 08:10:00.247306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.374 [2024-11-27 08:10:00.247331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.374 [2024-11-27 08:10:00.247897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.374 [2024-11-27 08:10:00.248080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.374 [2024-11-27 08:10:00.248089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.374 [2024-11-27 08:10:00.248096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.374 [2024-11-27 08:10:00.248102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.374 [2024-11-27 08:10:00.260000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.374 [2024-11-27 08:10:00.260427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.374 [2024-11-27 08:10:00.260471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.374 [2024-11-27 08:10:00.260493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.374 [2024-11-27 08:10:00.260935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.374 [2024-11-27 08:10:00.261120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.374 [2024-11-27 08:10:00.261129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.374 [2024-11-27 08:10:00.261135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.374 [2024-11-27 08:10:00.261142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.374 [2024-11-27 08:10:00.272993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.374 [2024-11-27 08:10:00.273340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.374 [2024-11-27 08:10:00.273356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.374 [2024-11-27 08:10:00.273363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.374 [2024-11-27 08:10:00.273536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.374 [2024-11-27 08:10:00.273708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.374 [2024-11-27 08:10:00.273717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.374 [2024-11-27 08:10:00.273723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.374 [2024-11-27 08:10:00.273729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.374 [2024-11-27 08:10:00.286119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.374 [2024-11-27 08:10:00.286544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.374 [2024-11-27 08:10:00.286561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.374 [2024-11-27 08:10:00.286568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.374 [2024-11-27 08:10:00.286747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.374 [2024-11-27 08:10:00.286926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.374 [2024-11-27 08:10:00.286934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.374 [2024-11-27 08:10:00.286941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.374 [2024-11-27 08:10:00.286953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.374 [2024-11-27 08:10:00.299298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.374 [2024-11-27 08:10:00.299759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.374 [2024-11-27 08:10:00.299804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.374 [2024-11-27 08:10:00.299826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.374 [2024-11-27 08:10:00.300423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.374 [2024-11-27 08:10:00.300986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.374 [2024-11-27 08:10:00.300995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.374 [2024-11-27 08:10:00.301001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.374 [2024-11-27 08:10:00.301007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.374 [2024-11-27 08:10:00.312383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.374 [2024-11-27 08:10:00.312831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.374 [2024-11-27 08:10:00.312875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.374 [2024-11-27 08:10:00.312897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.374 [2024-11-27 08:10:00.313503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.374 [2024-11-27 08:10:00.314046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.374 [2024-11-27 08:10:00.314054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.374 [2024-11-27 08:10:00.314060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.374 [2024-11-27 08:10:00.314067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.374 [2024-11-27 08:10:00.325529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.374 [2024-11-27 08:10:00.325988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.374 [2024-11-27 08:10:00.326032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.374 [2024-11-27 08:10:00.326054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.374 [2024-11-27 08:10:00.326635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.374 [2024-11-27 08:10:00.327118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.374 [2024-11-27 08:10:00.327127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.374 [2024-11-27 08:10:00.327133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.374 [2024-11-27 08:10:00.327140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.374 [2024-11-27 08:10:00.338645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.374 [2024-11-27 08:10:00.339081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.374 [2024-11-27 08:10:00.339098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.374 [2024-11-27 08:10:00.339105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.374 [2024-11-27 08:10:00.339278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.374 [2024-11-27 08:10:00.339451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.374 [2024-11-27 08:10:00.339459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.374 [2024-11-27 08:10:00.339465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.374 [2024-11-27 08:10:00.339472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.374 [2024-11-27 08:10:00.351666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.374 [2024-11-27 08:10:00.352120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.374 [2024-11-27 08:10:00.352165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.374 [2024-11-27 08:10:00.352188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.374 [2024-11-27 08:10:00.352770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.374 [2024-11-27 08:10:00.353032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.374 [2024-11-27 08:10:00.353044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.374 [2024-11-27 08:10:00.353051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.374 [2024-11-27 08:10:00.353058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.374 [2024-11-27 08:10:00.364770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.374 [2024-11-27 08:10:00.365230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.374 [2024-11-27 08:10:00.365275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.374 [2024-11-27 08:10:00.365297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.374 [2024-11-27 08:10:00.365866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.374 [2024-11-27 08:10:00.366049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.374 [2024-11-27 08:10:00.366058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.374 [2024-11-27 08:10:00.366064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.374 [2024-11-27 08:10:00.366071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.374 [2024-11-27 08:10:00.377913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.374 [2024-11-27 08:10:00.378271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.375 [2024-11-27 08:10:00.378288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.375 [2024-11-27 08:10:00.378295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.375 [2024-11-27 08:10:00.378485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.375 [2024-11-27 08:10:00.378670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.375 [2024-11-27 08:10:00.378678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.375 [2024-11-27 08:10:00.378685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.375 [2024-11-27 08:10:00.378691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.375 [2024-11-27 08:10:00.390972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.375 [2024-11-27 08:10:00.391396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.375 [2024-11-27 08:10:00.391413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.375 [2024-11-27 08:10:00.391420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.375 [2024-11-27 08:10:00.391592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.375 [2024-11-27 08:10:00.391765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.375 [2024-11-27 08:10:00.391773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.375 [2024-11-27 08:10:00.391779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.375 [2024-11-27 08:10:00.391788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.375 [2024-11-27 08:10:00.403786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.375 [2024-11-27 08:10:00.404248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.375 [2024-11-27 08:10:00.404292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.375 [2024-11-27 08:10:00.404315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.375 [2024-11-27 08:10:00.404895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.375 [2024-11-27 08:10:00.405386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.375 [2024-11-27 08:10:00.405398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.375 [2024-11-27 08:10:00.405407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.375 [2024-11-27 08:10:00.405416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.375 [2024-11-27 08:10:00.417485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.375 [2024-11-27 08:10:00.417829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.375 [2024-11-27 08:10:00.417845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.375 [2024-11-27 08:10:00.417852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.375 [2024-11-27 08:10:00.418043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.375 [2024-11-27 08:10:00.418217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.375 [2024-11-27 08:10:00.418225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.375 [2024-11-27 08:10:00.418231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.375 [2024-11-27 08:10:00.418237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.375 [2024-11-27 08:10:00.430376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.375 [2024-11-27 08:10:00.430805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.375 [2024-11-27 08:10:00.430822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.375 [2024-11-27 08:10:00.430829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.375 [2024-11-27 08:10:00.431008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.375 [2024-11-27 08:10:00.431182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.375 [2024-11-27 08:10:00.431190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.375 [2024-11-27 08:10:00.431197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.375 [2024-11-27 08:10:00.431203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.375 [2024-11-27 08:10:00.443194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.375 [2024-11-27 08:10:00.443645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.375 [2024-11-27 08:10:00.443665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.375 [2024-11-27 08:10:00.443672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.375 [2024-11-27 08:10:00.443844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.375 [2024-11-27 08:10:00.444022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.375 [2024-11-27 08:10:00.444031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.375 [2024-11-27 08:10:00.444037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.375 [2024-11-27 08:10:00.444044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.375 [2024-11-27 08:10:00.456132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.375 [2024-11-27 08:10:00.456576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.375 [2024-11-27 08:10:00.456622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.375 [2024-11-27 08:10:00.456645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.375 [2024-11-27 08:10:00.457108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.375 [2024-11-27 08:10:00.457281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.375 [2024-11-27 08:10:00.457289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.375 [2024-11-27 08:10:00.457295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.375 [2024-11-27 08:10:00.457301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.375 [2024-11-27 08:10:00.468983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.375 [2024-11-27 08:10:00.469449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.375 [2024-11-27 08:10:00.469494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.375 [2024-11-27 08:10:00.469516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.375 [2024-11-27 08:10:00.470115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.375 [2024-11-27 08:10:00.470517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.375 [2024-11-27 08:10:00.470525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.375 [2024-11-27 08:10:00.470531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.375 [2024-11-27 08:10:00.470538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.636 [2024-11-27 08:10:00.482039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.636 [2024-11-27 08:10:00.482480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-11-27 08:10:00.482495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.636 [2024-11-27 08:10:00.482502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.636 [2024-11-27 08:10:00.482678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.636 [2024-11-27 08:10:00.482851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.636 [2024-11-27 08:10:00.482859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.636 [2024-11-27 08:10:00.482865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.636 [2024-11-27 08:10:00.482871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.636 [2024-11-27 08:10:00.495142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.636 [2024-11-27 08:10:00.495570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-11-27 08:10:00.495587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.636 [2024-11-27 08:10:00.495595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.636 [2024-11-27 08:10:00.495769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.636 [2024-11-27 08:10:00.495943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.636 [2024-11-27 08:10:00.495958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.636 [2024-11-27 08:10:00.495965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.636 [2024-11-27 08:10:00.495971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.636 [2024-11-27 08:10:00.508276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.636 [2024-11-27 08:10:00.508720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-11-27 08:10:00.508768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.636 [2024-11-27 08:10:00.508790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.636 [2024-11-27 08:10:00.509340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.636 [2024-11-27 08:10:00.509519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.636 [2024-11-27 08:10:00.509527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.636 [2024-11-27 08:10:00.509534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.636 [2024-11-27 08:10:00.509541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.636 [2024-11-27 08:10:00.521315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.636 [2024-11-27 08:10:00.521780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-11-27 08:10:00.521798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.636 [2024-11-27 08:10:00.521805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.636 [2024-11-27 08:10:00.521991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.636 [2024-11-27 08:10:00.522174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.636 [2024-11-27 08:10:00.522186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.636 [2024-11-27 08:10:00.522192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.636 [2024-11-27 08:10:00.522199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.636 [2024-11-27 08:10:00.534218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.636 [2024-11-27 08:10:00.534668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-11-27 08:10:00.534684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.636 [2024-11-27 08:10:00.534691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.636 [2024-11-27 08:10:00.534854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.636 [2024-11-27 08:10:00.535042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.636 [2024-11-27 08:10:00.535051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.636 [2024-11-27 08:10:00.535057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.636 [2024-11-27 08:10:00.535064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.636 [2024-11-27 08:10:00.547115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.636 [2024-11-27 08:10:00.547553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-11-27 08:10:00.547597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.636 [2024-11-27 08:10:00.547620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.636 [2024-11-27 08:10:00.548217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.636 [2024-11-27 08:10:00.548655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.636 [2024-11-27 08:10:00.548662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.636 [2024-11-27 08:10:00.548668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.636 [2024-11-27 08:10:00.548674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.636 [2024-11-27 08:10:00.560064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.636 [2024-11-27 08:10:00.560501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-11-27 08:10:00.560538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.636 [2024-11-27 08:10:00.560563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.636 [2024-11-27 08:10:00.561161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.636 [2024-11-27 08:10:00.561746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.636 [2024-11-27 08:10:00.561770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.636 [2024-11-27 08:10:00.561791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.636 [2024-11-27 08:10:00.561810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.636 [2024-11-27 08:10:00.572970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.636 [2024-11-27 08:10:00.573395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-11-27 08:10:00.573411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.636 [2024-11-27 08:10:00.573418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.636 [2024-11-27 08:10:00.573581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.636 [2024-11-27 08:10:00.573746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.636 [2024-11-27 08:10:00.573754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.636 [2024-11-27 08:10:00.573760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.636 [2024-11-27 08:10:00.573765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.636 [2024-11-27 08:10:00.585809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.636 [2024-11-27 08:10:00.586271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-11-27 08:10:00.586315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.637 [2024-11-27 08:10:00.586337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.637 [2024-11-27 08:10:00.586808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.637 [2024-11-27 08:10:00.586986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.637 [2024-11-27 08:10:00.586995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.637 [2024-11-27 08:10:00.587001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.637 [2024-11-27 08:10:00.587008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.637 [2024-11-27 08:10:00.598688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.637 [2024-11-27 08:10:00.599132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-11-27 08:10:00.599149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.637 [2024-11-27 08:10:00.599156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.637 [2024-11-27 08:10:00.599329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.637 [2024-11-27 08:10:00.599502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.637 [2024-11-27 08:10:00.599510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.637 [2024-11-27 08:10:00.599516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.637 [2024-11-27 08:10:00.599522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.637 [2024-11-27 08:10:00.611596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.637 [2024-11-27 08:10:00.612005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-11-27 08:10:00.612027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.637 [2024-11-27 08:10:00.612034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.637 [2024-11-27 08:10:00.612198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.637 [2024-11-27 08:10:00.612362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.637 [2024-11-27 08:10:00.612369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.637 [2024-11-27 08:10:00.612375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.637 [2024-11-27 08:10:00.612381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.637 [2024-11-27 08:10:00.624478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.637 [2024-11-27 08:10:00.624923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-11-27 08:10:00.624940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.637 [2024-11-27 08:10:00.624952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.637 [2024-11-27 08:10:00.625126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.637 [2024-11-27 08:10:00.625300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.637 [2024-11-27 08:10:00.625309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.637 [2024-11-27 08:10:00.625315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.637 [2024-11-27 08:10:00.625321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.637 [2024-11-27 08:10:00.637501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.637 [2024-11-27 08:10:00.637939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-11-27 08:10:00.637996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.637 [2024-11-27 08:10:00.638019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.637 [2024-11-27 08:10:00.638601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.637 [2024-11-27 08:10:00.639146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.637 [2024-11-27 08:10:00.639155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.637 [2024-11-27 08:10:00.639162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.637 [2024-11-27 08:10:00.639168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.637 [2024-11-27 08:10:00.650385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.637 [2024-11-27 08:10:00.650809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-11-27 08:10:00.650826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.637 [2024-11-27 08:10:00.650833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.637 [2024-11-27 08:10:00.651017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.637 [2024-11-27 08:10:00.651193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.637 [2024-11-27 08:10:00.651201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.637 [2024-11-27 08:10:00.651207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.637 [2024-11-27 08:10:00.651213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.637 [2024-11-27 08:10:00.663228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.637 [2024-11-27 08:10:00.663641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-11-27 08:10:00.663658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.637 [2024-11-27 08:10:00.663665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.637 [2024-11-27 08:10:00.663840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.637 [2024-11-27 08:10:00.664018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.637 [2024-11-27 08:10:00.664027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.637 [2024-11-27 08:10:00.664033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.637 [2024-11-27 08:10:00.664039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.637 [2024-11-27 08:10:00.676111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.637 [2024-11-27 08:10:00.676514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-11-27 08:10:00.676531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.637 [2024-11-27 08:10:00.676537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.637 [2024-11-27 08:10:00.676701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.637 [2024-11-27 08:10:00.676863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.637 [2024-11-27 08:10:00.676871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.637 [2024-11-27 08:10:00.676877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.637 [2024-11-27 08:10:00.676883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.637 [2024-11-27 08:10:00.689164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.637 [2024-11-27 08:10:00.689593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-11-27 08:10:00.689637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.637 [2024-11-27 08:10:00.689659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.637 [2024-11-27 08:10:00.690085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.637 [2024-11-27 08:10:00.690264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.637 [2024-11-27 08:10:00.690273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.637 [2024-11-27 08:10:00.690283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.637 [2024-11-27 08:10:00.690290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.637 [2024-11-27 08:10:00.702039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.637 [2024-11-27 08:10:00.702442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-11-27 08:10:00.702486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.637 [2024-11-27 08:10:00.702509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.637 [2024-11-27 08:10:00.703055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.637 [2024-11-27 08:10:00.703229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.637 [2024-11-27 08:10:00.703237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.637 [2024-11-27 08:10:00.703243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.637 [2024-11-27 08:10:00.703249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.637 [2024-11-27 08:10:00.714866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.637 [2024-11-27 08:10:00.715295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.638 [2024-11-27 08:10:00.715312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.638 [2024-11-27 08:10:00.715319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.638 [2024-11-27 08:10:00.715492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.638 [2024-11-27 08:10:00.715665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.638 [2024-11-27 08:10:00.715673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.638 [2024-11-27 08:10:00.715679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.638 [2024-11-27 08:10:00.715685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.638 [2024-11-27 08:10:00.727785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.638 [2024-11-27 08:10:00.728214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.638 [2024-11-27 08:10:00.728230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.638 [2024-11-27 08:10:00.728237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.638 [2024-11-27 08:10:00.728409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.638 [2024-11-27 08:10:00.728582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.638 [2024-11-27 08:10:00.728590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.638 [2024-11-27 08:10:00.728597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.638 [2024-11-27 08:10:00.728603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.638 [2024-11-27 08:10:00.740819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.638 [2024-11-27 08:10:00.741253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.638 [2024-11-27 08:10:00.741270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.638 [2024-11-27 08:10:00.741278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.638 [2024-11-27 08:10:00.741458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.638 [2024-11-27 08:10:00.741637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.638 [2024-11-27 08:10:00.741647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.638 [2024-11-27 08:10:00.741654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.638 [2024-11-27 08:10:00.741660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.899 [2024-11-27 08:10:00.753852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.899 [2024-11-27 08:10:00.754287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-11-27 08:10:00.754305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.899 [2024-11-27 08:10:00.754313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.899 [2024-11-27 08:10:00.754491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.899 [2024-11-27 08:10:00.754669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.899 [2024-11-27 08:10:00.754678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.899 [2024-11-27 08:10:00.754685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.899 [2024-11-27 08:10:00.754692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.899 [2024-11-27 08:10:00.766913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.899 [2024-11-27 08:10:00.767409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-11-27 08:10:00.767455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.899 [2024-11-27 08:10:00.767478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.899 [2024-11-27 08:10:00.768050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.899 [2024-11-27 08:10:00.768238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.899 [2024-11-27 08:10:00.768247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.899 [2024-11-27 08:10:00.768253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.899 [2024-11-27 08:10:00.768259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.899 [2024-11-27 08:10:00.779991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.899 [2024-11-27 08:10:00.780333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-11-27 08:10:00.780350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.899 [2024-11-27 08:10:00.780360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.899 [2024-11-27 08:10:00.780534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.899 [2024-11-27 08:10:00.780707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.899 [2024-11-27 08:10:00.780715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.899 [2024-11-27 08:10:00.780721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.899 [2024-11-27 08:10:00.780727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.899 [2024-11-27 08:10:00.792891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.899 [2024-11-27 08:10:00.793242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-11-27 08:10:00.793259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.899 [2024-11-27 08:10:00.793266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.899 [2024-11-27 08:10:00.793437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.899 [2024-11-27 08:10:00.793611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.899 [2024-11-27 08:10:00.793619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.899 [2024-11-27 08:10:00.793624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.899 [2024-11-27 08:10:00.793630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.899 [2024-11-27 08:10:00.805852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.899 [2024-11-27 08:10:00.806287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-11-27 08:10:00.806332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.899 [2024-11-27 08:10:00.806354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.899 [2024-11-27 08:10:00.806880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.899 [2024-11-27 08:10:00.807059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.899 [2024-11-27 08:10:00.807068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.899 [2024-11-27 08:10:00.807075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.899 [2024-11-27 08:10:00.807081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.899 [2024-11-27 08:10:00.818793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.899 [2024-11-27 08:10:00.819230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-11-27 08:10:00.819247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.899 [2024-11-27 08:10:00.819254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.899 [2024-11-27 08:10:00.819426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.899 [2024-11-27 08:10:00.819605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.899 [2024-11-27 08:10:00.819613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.899 [2024-11-27 08:10:00.819619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.899 [2024-11-27 08:10:00.819625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.899 7206.00 IOPS, 28.15 MiB/s [2024-11-27T07:10:01.008Z] [2024-11-27 08:10:00.831599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.899 [2024-11-27 08:10:00.832039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-11-27 08:10:00.832085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.899 [2024-11-27 08:10:00.832107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.899 [2024-11-27 08:10:00.832688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.899 [2024-11-27 08:10:00.833260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.899 [2024-11-27 08:10:00.833269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.899 [2024-11-27 08:10:00.833275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.899 [2024-11-27 08:10:00.833282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
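The "7206.00 IOPS, 28.15 MiB/s" entry interleaved above is one of bdevperf's periodic throughput samples. The two numbers are consistent with a 4 KiB I/O size (an assumption, not stated in this part of the log): 7206 IOPS x 4096 B comes to roughly 28.15 MiB/s. A quick check of that arithmetic:

/* Quick arithmetic check for the periodic bdevperf sample in the log.
 * Assumption: 4 KiB I/O size, which is what makes the two printed
 * numbers (7206.00 IOPS, 28.15 MiB/s) agree. */
#include <stdio.h>

int main(void)
{
    const double iops = 7206.0;           /* value printed in the log */
    const double io_size_bytes = 4096.0;  /* assumed 4 KiB block size */
    const double mib_per_s = iops * io_size_bytes / (1024.0 * 1024.0);

    printf("%.2f IOPS at 4 KiB ~= %.2f MiB/s\n", iops, mib_per_s);
    return 0;
}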
00:27:06.899 [2024-11-27 08:10:00.844503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.899 [2024-11-27 08:10:00.844927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-11-27 08:10:00.844943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.899 [2024-11-27 08:10:00.844957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.899 [2024-11-27 08:10:00.845130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.899 [2024-11-27 08:10:00.845302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.899 [2024-11-27 08:10:00.845310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.899 [2024-11-27 08:10:00.845316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.899 [2024-11-27 08:10:00.845322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.899 [2024-11-27 08:10:00.857392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.899 [2024-11-27 08:10:00.857828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-11-27 08:10:00.857872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.900 [2024-11-27 08:10:00.857894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.900 [2024-11-27 08:10:00.858320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.900 [2024-11-27 08:10:00.858494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.900 [2024-11-27 08:10:00.858502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.900 [2024-11-27 08:10:00.858512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.900 [2024-11-27 08:10:00.858518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.900 [2024-11-27 08:10:00.870261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.900 [2024-11-27 08:10:00.870683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-11-27 08:10:00.870699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.900 [2024-11-27 08:10:00.870707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.900 [2024-11-27 08:10:00.870870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.900 [2024-11-27 08:10:00.871060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.900 [2024-11-27 08:10:00.871069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.900 [2024-11-27 08:10:00.871076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.900 [2024-11-27 08:10:00.871082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.900 [2024-11-27 08:10:00.883063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.900 [2024-11-27 08:10:00.883508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-11-27 08:10:00.883557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.900 [2024-11-27 08:10:00.883580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.900 [2024-11-27 08:10:00.884137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.900 [2024-11-27 08:10:00.884312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.900 [2024-11-27 08:10:00.884320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.900 [2024-11-27 08:10:00.884326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.900 [2024-11-27 08:10:00.884332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.900 [2024-11-27 08:10:00.895964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.900 [2024-11-27 08:10:00.896392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-11-27 08:10:00.896408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.900 [2024-11-27 08:10:00.896415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.900 [2024-11-27 08:10:00.896588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.900 [2024-11-27 08:10:00.896760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.900 [2024-11-27 08:10:00.896768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.900 [2024-11-27 08:10:00.896775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.900 [2024-11-27 08:10:00.896781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.900 [2024-11-27 08:10:00.909019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.900 [2024-11-27 08:10:00.909432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-11-27 08:10:00.909449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.900 [2024-11-27 08:10:00.909456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.900 [2024-11-27 08:10:00.909634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.900 [2024-11-27 08:10:00.909813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.900 [2024-11-27 08:10:00.909821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.900 [2024-11-27 08:10:00.909827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.900 [2024-11-27 08:10:00.909834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.900 [2024-11-27 08:10:00.922198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.900 [2024-11-27 08:10:00.922620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-11-27 08:10:00.922637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.900 [2024-11-27 08:10:00.922644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.900 [2024-11-27 08:10:00.922822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.900 [2024-11-27 08:10:00.923013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.900 [2024-11-27 08:10:00.923022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.900 [2024-11-27 08:10:00.923030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.900 [2024-11-27 08:10:00.923036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.900 [2024-11-27 08:10:00.935272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.900 [2024-11-27 08:10:00.935737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-11-27 08:10:00.935754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.900 [2024-11-27 08:10:00.935761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.900 [2024-11-27 08:10:00.935939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.900 [2024-11-27 08:10:00.936125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.900 [2024-11-27 08:10:00.936135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.900 [2024-11-27 08:10:00.936141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.900 [2024-11-27 08:10:00.936148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.900 [2024-11-27 08:10:00.948365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.900 [2024-11-27 08:10:00.948732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-11-27 08:10:00.948749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.900 [2024-11-27 08:10:00.948760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.900 [2024-11-27 08:10:00.948939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.900 [2024-11-27 08:10:00.949124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.900 [2024-11-27 08:10:00.949134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.900 [2024-11-27 08:10:00.949141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.900 [2024-11-27 08:10:00.949147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.900 [2024-11-27 08:10:00.961562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.900 [2024-11-27 08:10:00.961976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-11-27 08:10:00.961993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.900 [2024-11-27 08:10:00.962001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.900 [2024-11-27 08:10:00.962179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.900 [2024-11-27 08:10:00.962358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.900 [2024-11-27 08:10:00.962366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.900 [2024-11-27 08:10:00.962373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.900 [2024-11-27 08:10:00.962380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.900 [2024-11-27 08:10:00.974640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.900 [2024-11-27 08:10:00.975078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-11-27 08:10:00.975095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.900 [2024-11-27 08:10:00.975102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.900 [2024-11-27 08:10:00.975281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.900 [2024-11-27 08:10:00.975459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.900 [2024-11-27 08:10:00.975467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.900 [2024-11-27 08:10:00.975473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.900 [2024-11-27 08:10:00.975480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.900 [2024-11-27 08:10:00.987712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.901 [2024-11-27 08:10:00.988065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-11-27 08:10:00.988082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.901 [2024-11-27 08:10:00.988089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.901 [2024-11-27 08:10:00.988268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.901 [2024-11-27 08:10:00.988450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.901 [2024-11-27 08:10:00.988458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.901 [2024-11-27 08:10:00.988464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.901 [2024-11-27 08:10:00.988471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.901 [2024-11-27 08:10:01.000910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.901 [2024-11-27 08:10:01.001356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-11-27 08:10:01.001374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:06.901 [2024-11-27 08:10:01.001382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:06.901 [2024-11-27 08:10:01.001561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:06.901 [2024-11-27 08:10:01.001744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.901 [2024-11-27 08:10:01.001753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.901 [2024-11-27 08:10:01.001759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.901 [2024-11-27 08:10:01.001766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.160 [2024-11-27 08:10:01.014069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.160 [2024-11-27 08:10:01.014442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.160 [2024-11-27 08:10:01.014459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.160 [2024-11-27 08:10:01.014467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.160 [2024-11-27 08:10:01.014646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.160 [2024-11-27 08:10:01.014826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.160 [2024-11-27 08:10:01.014834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.160 [2024-11-27 08:10:01.014842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.160 [2024-11-27 08:10:01.014849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.160 [2024-11-27 08:10:01.027248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.160 [2024-11-27 08:10:01.027655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.160 [2024-11-27 08:10:01.027672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.160 [2024-11-27 08:10:01.027680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.160 [2024-11-27 08:10:01.027854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.160 [2024-11-27 08:10:01.028033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.160 [2024-11-27 08:10:01.028042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.160 [2024-11-27 08:10:01.028052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.160 [2024-11-27 08:10:01.028058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.160 [2024-11-27 08:10:01.040262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.160 [2024-11-27 08:10:01.040690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.160 [2024-11-27 08:10:01.040707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.160 [2024-11-27 08:10:01.040714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.160 [2024-11-27 08:10:01.040888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.160 [2024-11-27 08:10:01.041069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.160 [2024-11-27 08:10:01.041078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.160 [2024-11-27 08:10:01.041084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.160 [2024-11-27 08:10:01.041090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.160 [2024-11-27 08:10:01.053102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.160 [2024-11-27 08:10:01.053468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.160 [2024-11-27 08:10:01.053512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.160 [2024-11-27 08:10:01.053535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.160 [2024-11-27 08:10:01.054018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.160 [2024-11-27 08:10:01.054194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.160 [2024-11-27 08:10:01.054202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.160 [2024-11-27 08:10:01.054208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.160 [2024-11-27 08:10:01.054214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.160 [2024-11-27 08:10:01.065984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.160 [2024-11-27 08:10:01.066370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.160 [2024-11-27 08:10:01.066413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.160 [2024-11-27 08:10:01.066435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.160 [2024-11-27 08:10:01.066859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.160 [2024-11-27 08:10:01.067041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.160 [2024-11-27 08:10:01.067050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.160 [2024-11-27 08:10:01.067057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.160 [2024-11-27 08:10:01.067063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.160 [2024-11-27 08:10:01.078853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.160 [2024-11-27 08:10:01.079245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.160 [2024-11-27 08:10:01.079262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.160 [2024-11-27 08:10:01.079269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.160 [2024-11-27 08:10:01.079432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.160 [2024-11-27 08:10:01.079620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.160 [2024-11-27 08:10:01.079628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.160 [2024-11-27 08:10:01.079634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.160 [2024-11-27 08:10:01.079640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.160 [2024-11-27 08:10:01.091718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.160 [2024-11-27 08:10:01.092170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.161 [2024-11-27 08:10:01.092187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.161 [2024-11-27 08:10:01.092194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.161 [2024-11-27 08:10:01.092357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.161 [2024-11-27 08:10:01.092521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.161 [2024-11-27 08:10:01.092529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.161 [2024-11-27 08:10:01.092534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.161 [2024-11-27 08:10:01.092540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.161 [2024-11-27 08:10:01.104595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.161 [2024-11-27 08:10:01.105064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.161 [2024-11-27 08:10:01.105108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.161 [2024-11-27 08:10:01.105130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.161 [2024-11-27 08:10:01.105712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.161 [2024-11-27 08:10:01.106121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.161 [2024-11-27 08:10:01.106130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.161 [2024-11-27 08:10:01.106136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.161 [2024-11-27 08:10:01.106142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.161 [2024-11-27 08:10:01.117474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.161 [2024-11-27 08:10:01.117916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.161 [2024-11-27 08:10:01.117933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.161 [2024-11-27 08:10:01.117943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.161 [2024-11-27 08:10:01.118121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.161 [2024-11-27 08:10:01.118294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.161 [2024-11-27 08:10:01.118302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.161 [2024-11-27 08:10:01.118308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.161 [2024-11-27 08:10:01.118314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.161 [2024-11-27 08:10:01.130326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.161 [2024-11-27 08:10:01.130707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.161 [2024-11-27 08:10:01.130724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.161 [2024-11-27 08:10:01.130731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.161 [2024-11-27 08:10:01.130903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.161 [2024-11-27 08:10:01.131083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.161 [2024-11-27 08:10:01.131092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.161 [2024-11-27 08:10:01.131098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.161 [2024-11-27 08:10:01.131104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.161 [2024-11-27 08:10:01.143152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.161 [2024-11-27 08:10:01.143516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.161 [2024-11-27 08:10:01.143560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.161 [2024-11-27 08:10:01.143583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.161 [2024-11-27 08:10:01.144026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.161 [2024-11-27 08:10:01.144199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.161 [2024-11-27 08:10:01.144207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.161 [2024-11-27 08:10:01.144213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.161 [2024-11-27 08:10:01.144219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.161 [2024-11-27 08:10:01.156077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.161 [2024-11-27 08:10:01.156453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.161 [2024-11-27 08:10:01.156470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.161 [2024-11-27 08:10:01.156477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.161 [2024-11-27 08:10:01.156649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.161 [2024-11-27 08:10:01.156828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.161 [2024-11-27 08:10:01.156836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.161 [2024-11-27 08:10:01.156842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.161 [2024-11-27 08:10:01.156849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.161 [2024-11-27 08:10:01.168903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.161 [2024-11-27 08:10:01.169286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.161 [2024-11-27 08:10:01.169302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.161 [2024-11-27 08:10:01.169309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.161 [2024-11-27 08:10:01.169481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.161 [2024-11-27 08:10:01.169655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.161 [2024-11-27 08:10:01.169663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.161 [2024-11-27 08:10:01.169670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.161 [2024-11-27 08:10:01.169676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.161 [2024-11-27 08:10:01.181823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.161 [2024-11-27 08:10:01.182138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.161 [2024-11-27 08:10:01.182155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.161 [2024-11-27 08:10:01.182162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.161 [2024-11-27 08:10:01.182335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.161 [2024-11-27 08:10:01.182508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.161 [2024-11-27 08:10:01.182516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.161 [2024-11-27 08:10:01.182522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.161 [2024-11-27 08:10:01.182529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.161 [2024-11-27 08:10:01.194751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.161 [2024-11-27 08:10:01.195157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.161 [2024-11-27 08:10:01.195174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.161 [2024-11-27 08:10:01.195181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.161 [2024-11-27 08:10:01.195353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.161 [2024-11-27 08:10:01.195526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.161 [2024-11-27 08:10:01.195534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.161 [2024-11-27 08:10:01.195544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.161 [2024-11-27 08:10:01.195550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.161 [2024-11-27 08:10:01.207711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.161 [2024-11-27 08:10:01.208205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.161 [2024-11-27 08:10:01.208250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.161 [2024-11-27 08:10:01.208272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.161 [2024-11-27 08:10:01.208853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.161 [2024-11-27 08:10:01.209039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.161 [2024-11-27 08:10:01.209048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.161 [2024-11-27 08:10:01.209054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.161 [2024-11-27 08:10:01.209060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.161 [2024-11-27 08:10:01.220658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.162 [2024-11-27 08:10:01.221088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.162 [2024-11-27 08:10:01.221105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.162 [2024-11-27 08:10:01.221112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.162 [2024-11-27 08:10:01.221285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.162 [2024-11-27 08:10:01.221458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.162 [2024-11-27 08:10:01.221466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.162 [2024-11-27 08:10:01.221472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.162 [2024-11-27 08:10:01.221479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.162 [2024-11-27 08:10:01.233557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.162 [2024-11-27 08:10:01.233984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.162 [2024-11-27 08:10:01.234000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.162 [2024-11-27 08:10:01.234007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.162 [2024-11-27 08:10:01.234187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.162 [2024-11-27 08:10:01.234351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.162 [2024-11-27 08:10:01.234359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.162 [2024-11-27 08:10:01.234364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.162 [2024-11-27 08:10:01.234370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.162 [2024-11-27 08:10:01.246483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.162 [2024-11-27 08:10:01.246911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.162 [2024-11-27 08:10:01.246927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.162 [2024-11-27 08:10:01.246934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.162 [2024-11-27 08:10:01.247112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.162 [2024-11-27 08:10:01.247286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.162 [2024-11-27 08:10:01.247294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.162 [2024-11-27 08:10:01.247300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.162 [2024-11-27 08:10:01.247306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.162 [2024-11-27 08:10:01.259360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.162 [2024-11-27 08:10:01.259726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.162 [2024-11-27 08:10:01.259742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.162 [2024-11-27 08:10:01.259749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.162 [2024-11-27 08:10:01.259921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.162 [2024-11-27 08:10:01.260102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.162 [2024-11-27 08:10:01.260111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.162 [2024-11-27 08:10:01.260117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.162 [2024-11-27 08:10:01.260123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.421 [2024-11-27 08:10:01.272305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.421 [2024-11-27 08:10:01.272785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.421 [2024-11-27 08:10:01.272836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.421 [2024-11-27 08:10:01.272860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.421 [2024-11-27 08:10:01.273315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.421 [2024-11-27 08:10:01.273496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.421 [2024-11-27 08:10:01.273505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.421 [2024-11-27 08:10:01.273511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.421 [2024-11-27 08:10:01.273518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.421 [2024-11-27 08:10:01.285477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.421 [2024-11-27 08:10:01.285939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.421 [2024-11-27 08:10:01.285982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.421 [2024-11-27 08:10:01.286015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.421 [2024-11-27 08:10:01.286546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.421 [2024-11-27 08:10:01.286724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.421 [2024-11-27 08:10:01.286732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.421 [2024-11-27 08:10:01.286740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.421 [2024-11-27 08:10:01.286746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.421 [2024-11-27 08:10:01.298482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.421 [2024-11-27 08:10:01.298870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.421 [2024-11-27 08:10:01.298915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.421 [2024-11-27 08:10:01.298937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.421 [2024-11-27 08:10:01.299419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.421 [2024-11-27 08:10:01.299593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.421 [2024-11-27 08:10:01.299601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.421 [2024-11-27 08:10:01.299608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.422 [2024-11-27 08:10:01.299615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.422 [2024-11-27 08:10:01.311477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.422 [2024-11-27 08:10:01.311926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.422 [2024-11-27 08:10:01.311943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.422 [2024-11-27 08:10:01.311957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.422 [2024-11-27 08:10:01.312130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.422 [2024-11-27 08:10:01.312303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.422 [2024-11-27 08:10:01.312312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.422 [2024-11-27 08:10:01.312318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.422 [2024-11-27 08:10:01.312324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.422 [2024-11-27 08:10:01.324513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.422 [2024-11-27 08:10:01.324893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.422 [2024-11-27 08:10:01.324909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.422 [2024-11-27 08:10:01.324916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.422 [2024-11-27 08:10:01.325096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.422 [2024-11-27 08:10:01.325273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.422 [2024-11-27 08:10:01.325281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.422 [2024-11-27 08:10:01.325287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.422 [2024-11-27 08:10:01.325293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.422 [2024-11-27 08:10:01.337339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.422 [2024-11-27 08:10:01.337761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.422 [2024-11-27 08:10:01.337778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.422 [2024-11-27 08:10:01.337785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.422 [2024-11-27 08:10:01.337962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.422 [2024-11-27 08:10:01.338136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.422 [2024-11-27 08:10:01.338144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.422 [2024-11-27 08:10:01.338150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.422 [2024-11-27 08:10:01.338156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.422 [2024-11-27 08:10:01.350199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.422 [2024-11-27 08:10:01.350640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.422 [2024-11-27 08:10:01.350684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.422 [2024-11-27 08:10:01.350706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.422 [2024-11-27 08:10:01.351302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.422 [2024-11-27 08:10:01.351735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.422 [2024-11-27 08:10:01.351743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.422 [2024-11-27 08:10:01.351750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.422 [2024-11-27 08:10:01.351756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.422 [2024-11-27 08:10:01.363037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.422 [2024-11-27 08:10:01.363460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.422 [2024-11-27 08:10:01.363477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.422 [2024-11-27 08:10:01.363484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.422 [2024-11-27 08:10:01.363647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.422 [2024-11-27 08:10:01.363810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.422 [2024-11-27 08:10:01.363818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.422 [2024-11-27 08:10:01.363824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.422 [2024-11-27 08:10:01.363833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.422 [2024-11-27 08:10:01.375891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.422 [2024-11-27 08:10:01.376347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.422 [2024-11-27 08:10:01.376402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.422 [2024-11-27 08:10:01.376424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.422 [2024-11-27 08:10:01.377018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.422 [2024-11-27 08:10:01.377564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.422 [2024-11-27 08:10:01.377572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.422 [2024-11-27 08:10:01.377579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.422 [2024-11-27 08:10:01.377585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.422 [2024-11-27 08:10:01.388812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.422 [2024-11-27 08:10:01.389249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.422 [2024-11-27 08:10:01.389265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.422 [2024-11-27 08:10:01.389272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.422 [2024-11-27 08:10:01.389444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.422 [2024-11-27 08:10:01.389618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.422 [2024-11-27 08:10:01.389625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.422 [2024-11-27 08:10:01.389632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.422 [2024-11-27 08:10:01.389638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.422 [2024-11-27 08:10:01.401685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.422 [2024-11-27 08:10:01.402112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.422 [2024-11-27 08:10:01.402129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.422 [2024-11-27 08:10:01.402136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.422 [2024-11-27 08:10:01.402309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.422 [2024-11-27 08:10:01.402482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.422 [2024-11-27 08:10:01.402490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.422 [2024-11-27 08:10:01.402496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.422 [2024-11-27 08:10:01.402502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.422 [2024-11-27 08:10:01.414680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.422 [2024-11-27 08:10:01.415109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.422 [2024-11-27 08:10:01.415125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.422 [2024-11-27 08:10:01.415132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.422 [2024-11-27 08:10:01.415305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.422 [2024-11-27 08:10:01.415478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.422 [2024-11-27 08:10:01.415486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.422 [2024-11-27 08:10:01.415493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.422 [2024-11-27 08:10:01.415498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.422 [2024-11-27 08:10:01.427543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.422 [2024-11-27 08:10:01.427969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.422 [2024-11-27 08:10:01.427985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.422 [2024-11-27 08:10:01.427992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.422 [2024-11-27 08:10:01.428155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.423 [2024-11-27 08:10:01.428319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.423 [2024-11-27 08:10:01.428327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.423 [2024-11-27 08:10:01.428333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.423 [2024-11-27 08:10:01.428339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.423 [2024-11-27 08:10:01.440359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.423 [2024-11-27 08:10:01.440786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.423 [2024-11-27 08:10:01.440828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.423 [2024-11-27 08:10:01.440851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.423 [2024-11-27 08:10:01.441449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.423 [2024-11-27 08:10:01.441798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.423 [2024-11-27 08:10:01.441806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.423 [2024-11-27 08:10:01.441812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.423 [2024-11-27 08:10:01.441818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.423 [2024-11-27 08:10:01.453233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.423 [2024-11-27 08:10:01.453666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.423 [2024-11-27 08:10:01.453682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.423 [2024-11-27 08:10:01.453693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.423 [2024-11-27 08:10:01.453866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.423 [2024-11-27 08:10:01.454072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.423 [2024-11-27 08:10:01.454081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.423 [2024-11-27 08:10:01.454087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.423 [2024-11-27 08:10:01.454093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.423 [2024-11-27 08:10:01.466135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.423 [2024-11-27 08:10:01.466538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.423 [2024-11-27 08:10:01.466554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.423 [2024-11-27 08:10:01.466560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.423 [2024-11-27 08:10:01.466724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.423 [2024-11-27 08:10:01.466886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.423 [2024-11-27 08:10:01.466894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.423 [2024-11-27 08:10:01.466900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.423 [2024-11-27 08:10:01.466906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.423 [2024-11-27 08:10:01.478954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.423 [2024-11-27 08:10:01.479392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.423 [2024-11-27 08:10:01.479434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.423 [2024-11-27 08:10:01.479456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.423 [2024-11-27 08:10:01.479886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.423 [2024-11-27 08:10:01.480065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.423 [2024-11-27 08:10:01.480073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.423 [2024-11-27 08:10:01.480079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.423 [2024-11-27 08:10:01.480085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.423 [2024-11-27 08:10:01.491816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.423 [2024-11-27 08:10:01.492176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.423 [2024-11-27 08:10:01.492191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.423 [2024-11-27 08:10:01.492198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.423 [2024-11-27 08:10:01.492371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.423 [2024-11-27 08:10:01.492543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.423 [2024-11-27 08:10:01.492553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.423 [2024-11-27 08:10:01.492560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.423 [2024-11-27 08:10:01.492566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.423 [2024-11-27 08:10:01.504766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.423 [2024-11-27 08:10:01.505191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.423 [2024-11-27 08:10:01.505208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.423 [2024-11-27 08:10:01.505215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.423 [2024-11-27 08:10:01.505387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.423 [2024-11-27 08:10:01.505559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.423 [2024-11-27 08:10:01.505567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.423 [2024-11-27 08:10:01.505574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.423 [2024-11-27 08:10:01.505580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.423 [2024-11-27 08:10:01.517837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.423 [2024-11-27 08:10:01.518286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.423 [2024-11-27 08:10:01.518304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.423 [2024-11-27 08:10:01.518311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.423 [2024-11-27 08:10:01.518484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.423 [2024-11-27 08:10:01.518657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.423 [2024-11-27 08:10:01.518665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.423 [2024-11-27 08:10:01.518671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.423 [2024-11-27 08:10:01.518677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.684 [2024-11-27 08:10:01.530911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.684 [2024-11-27 08:10:01.531380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.684 [2024-11-27 08:10:01.531426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.684 [2024-11-27 08:10:01.531450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.684 [2024-11-27 08:10:01.531919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.684 [2024-11-27 08:10:01.532100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.684 [2024-11-27 08:10:01.532109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.684 [2024-11-27 08:10:01.532116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.684 [2024-11-27 08:10:01.532126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.684 [2024-11-27 08:10:01.543978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.684 [2024-11-27 08:10:01.544458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.684 [2024-11-27 08:10:01.544502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.684 [2024-11-27 08:10:01.544524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.684 [2024-11-27 08:10:01.545067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.684 [2024-11-27 08:10:01.545241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.684 [2024-11-27 08:10:01.545249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.684 [2024-11-27 08:10:01.545256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.684 [2024-11-27 08:10:01.545262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.684 [2024-11-27 08:10:01.557098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.684 [2024-11-27 08:10:01.557511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.684 [2024-11-27 08:10:01.557555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.684 [2024-11-27 08:10:01.557577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.684 [2024-11-27 08:10:01.558011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.684 [2024-11-27 08:10:01.558186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.685 [2024-11-27 08:10:01.558195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.685 [2024-11-27 08:10:01.558201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.685 [2024-11-27 08:10:01.558207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.685 [2024-11-27 08:10:01.569942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.685 [2024-11-27 08:10:01.570321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.685 [2024-11-27 08:10:01.570336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.685 [2024-11-27 08:10:01.570343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.685 [2024-11-27 08:10:01.570506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.685 [2024-11-27 08:10:01.570669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.685 [2024-11-27 08:10:01.570677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.685 [2024-11-27 08:10:01.570683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.685 [2024-11-27 08:10:01.570688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.685 [2024-11-27 08:10:01.582801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.685 [2024-11-27 08:10:01.583204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.685 [2024-11-27 08:10:01.583221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.685 [2024-11-27 08:10:01.583228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.685 [2024-11-27 08:10:01.583400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.685 [2024-11-27 08:10:01.583574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.685 [2024-11-27 08:10:01.583582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.685 [2024-11-27 08:10:01.583588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.685 [2024-11-27 08:10:01.583594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.685 [2024-11-27 08:10:01.595633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.685 [2024-11-27 08:10:01.596063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.685 [2024-11-27 08:10:01.596108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.685 [2024-11-27 08:10:01.596131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.685 [2024-11-27 08:10:01.596705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.685 [2024-11-27 08:10:01.596968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.685 [2024-11-27 08:10:01.596980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.685 [2024-11-27 08:10:01.596990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.685 [2024-11-27 08:10:01.596998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.685 [2024-11-27 08:10:01.609183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.685 [2024-11-27 08:10:01.609589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.685 [2024-11-27 08:10:01.609605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.685 [2024-11-27 08:10:01.609612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.685 [2024-11-27 08:10:01.609780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.685 [2024-11-27 08:10:01.609955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.685 [2024-11-27 08:10:01.609964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.685 [2024-11-27 08:10:01.609969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.685 [2024-11-27 08:10:01.609993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.685 [2024-11-27 08:10:01.622121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.685 [2024-11-27 08:10:01.622520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.685 [2024-11-27 08:10:01.622536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.685 [2024-11-27 08:10:01.622543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.685 [2024-11-27 08:10:01.622710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.685 [2024-11-27 08:10:01.622873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.685 [2024-11-27 08:10:01.622881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.685 [2024-11-27 08:10:01.622887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.685 [2024-11-27 08:10:01.622893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.685 [2024-11-27 08:10:01.634980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.685 [2024-11-27 08:10:01.635435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.685 [2024-11-27 08:10:01.635480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.685 [2024-11-27 08:10:01.635503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.685 [2024-11-27 08:10:01.636000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.685 [2024-11-27 08:10:01.636174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.685 [2024-11-27 08:10:01.636183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.685 [2024-11-27 08:10:01.636189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.685 [2024-11-27 08:10:01.636195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.685 [2024-11-27 08:10:01.647938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.685 [2024-11-27 08:10:01.648390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.685 [2024-11-27 08:10:01.648433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.685 [2024-11-27 08:10:01.648455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.685 [2024-11-27 08:10:01.649052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.685 [2024-11-27 08:10:01.649484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.685 [2024-11-27 08:10:01.649492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.685 [2024-11-27 08:10:01.649498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.685 [2024-11-27 08:10:01.649504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.685 [2024-11-27 08:10:01.660743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.685 [2024-11-27 08:10:01.661117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.685 [2024-11-27 08:10:01.661134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.685 [2024-11-27 08:10:01.661142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.685 [2024-11-27 08:10:01.661315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.685 [2024-11-27 08:10:01.661488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.685 [2024-11-27 08:10:01.661500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.685 [2024-11-27 08:10:01.661507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.685 [2024-11-27 08:10:01.661513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.685 [2024-11-27 08:10:01.673574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.685 [2024-11-27 08:10:01.674028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.685 [2024-11-27 08:10:01.674045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.685 [2024-11-27 08:10:01.674052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.685 [2024-11-27 08:10:01.674225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.685 [2024-11-27 08:10:01.674398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.685 [2024-11-27 08:10:01.674406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.685 [2024-11-27 08:10:01.674413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.685 [2024-11-27 08:10:01.674419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.685 [2024-11-27 08:10:01.686421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.685 [2024-11-27 08:10:01.686857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.685 [2024-11-27 08:10:01.686900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.685 [2024-11-27 08:10:01.686923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.686 [2024-11-27 08:10:01.687424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.686 [2024-11-27 08:10:01.687597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.686 [2024-11-27 08:10:01.687605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.686 [2024-11-27 08:10:01.687612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.686 [2024-11-27 08:10:01.687618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.686 [2024-11-27 08:10:01.699222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.686 [2024-11-27 08:10:01.699666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.686 [2024-11-27 08:10:01.699682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.686 [2024-11-27 08:10:01.699689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.686 [2024-11-27 08:10:01.699863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.686 [2024-11-27 08:10:01.700042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.686 [2024-11-27 08:10:01.700051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.686 [2024-11-27 08:10:01.700058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.686 [2024-11-27 08:10:01.700067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.686 [2024-11-27 08:10:01.712344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.686 [2024-11-27 08:10:01.712772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.686 [2024-11-27 08:10:01.712789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.686 [2024-11-27 08:10:01.712796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.686 [2024-11-27 08:10:01.712966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.686 [2024-11-27 08:10:01.713154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.686 [2024-11-27 08:10:01.713162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.686 [2024-11-27 08:10:01.713169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.686 [2024-11-27 08:10:01.713175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.686 [2024-11-27 08:10:01.725278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.686 [2024-11-27 08:10:01.725705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.686 [2024-11-27 08:10:01.725723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.686 [2024-11-27 08:10:01.725730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.686 [2024-11-27 08:10:01.725903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.686 [2024-11-27 08:10:01.726084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.686 [2024-11-27 08:10:01.726093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.686 [2024-11-27 08:10:01.726099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.686 [2024-11-27 08:10:01.726105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.686 [2024-11-27 08:10:01.738150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.686 [2024-11-27 08:10:01.738591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.686 [2024-11-27 08:10:01.738634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.686 [2024-11-27 08:10:01.738656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.686 [2024-11-27 08:10:01.739252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.686 [2024-11-27 08:10:01.739661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.686 [2024-11-27 08:10:01.739669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.686 [2024-11-27 08:10:01.739675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.686 [2024-11-27 08:10:01.739681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.686 [2024-11-27 08:10:01.751031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.686 [2024-11-27 08:10:01.751435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.686 [2024-11-27 08:10:01.751454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.686 [2024-11-27 08:10:01.751460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.686 [2024-11-27 08:10:01.751624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.686 [2024-11-27 08:10:01.751788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.686 [2024-11-27 08:10:01.751796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.686 [2024-11-27 08:10:01.751802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.686 [2024-11-27 08:10:01.751807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.686 [2024-11-27 08:10:01.763871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.686 [2024-11-27 08:10:01.764308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.686 [2024-11-27 08:10:01.764325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.686 [2024-11-27 08:10:01.764332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.686 [2024-11-27 08:10:01.764504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.686 [2024-11-27 08:10:01.764679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.686 [2024-11-27 08:10:01.764687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.686 [2024-11-27 08:10:01.764693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.686 [2024-11-27 08:10:01.764699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.686 [2024-11-27 08:10:01.776749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.686 [2024-11-27 08:10:01.777174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.686 [2024-11-27 08:10:01.777220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.686 [2024-11-27 08:10:01.777242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.686 [2024-11-27 08:10:01.777825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.686 [2024-11-27 08:10:01.778426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.686 [2024-11-27 08:10:01.778452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.686 [2024-11-27 08:10:01.778471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.686 [2024-11-27 08:10:01.778477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.686 [2024-11-27 08:10:01.790397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.686 [2024-11-27 08:10:01.790847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.686 [2024-11-27 08:10:01.790865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.686 [2024-11-27 08:10:01.790889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.686 [2024-11-27 08:10:01.791080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.686 [2024-11-27 08:10:01.791264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.686 [2024-11-27 08:10:01.791274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.686 [2024-11-27 08:10:01.791281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.686 [2024-11-27 08:10:01.791288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.949 [2024-11-27 08:10:01.803607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.949 [2024-11-27 08:10:01.804068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-11-27 08:10:01.804085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.949 [2024-11-27 08:10:01.804092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.949 [2024-11-27 08:10:01.804265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.949 [2024-11-27 08:10:01.804439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.949 [2024-11-27 08:10:01.804447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.949 [2024-11-27 08:10:01.804454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.949 [2024-11-27 08:10:01.804460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.949 [2024-11-27 08:10:01.816451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.949 [2024-11-27 08:10:01.816801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-11-27 08:10:01.816817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.949 [2024-11-27 08:10:01.816824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.949 [2024-11-27 08:10:01.817011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.949 [2024-11-27 08:10:01.817184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.949 [2024-11-27 08:10:01.817192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.949 [2024-11-27 08:10:01.817198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.949 [2024-11-27 08:10:01.817205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.949 5764.80 IOPS, 22.52 MiB/s [2024-11-27T07:10:02.058Z] [2024-11-27 08:10:01.829374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.949 [2024-11-27 08:10:01.829832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-11-27 08:10:01.829877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.949 [2024-11-27 08:10:01.829900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.949 [2024-11-27 08:10:01.830501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.949 [2024-11-27 08:10:01.831094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.949 [2024-11-27 08:10:01.831106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.949 [2024-11-27 08:10:01.831112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.949 [2024-11-27 08:10:01.831119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.949 [2024-11-27 08:10:01.842244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.949 [2024-11-27 08:10:01.842648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-11-27 08:10:01.842664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.949 [2024-11-27 08:10:01.842670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.949 [2024-11-27 08:10:01.842833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.949 [2024-11-27 08:10:01.843020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.949 [2024-11-27 08:10:01.843028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.949 [2024-11-27 08:10:01.843035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.949 [2024-11-27 08:10:01.843041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.949 [2024-11-27 08:10:01.855079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.949 [2024-11-27 08:10:01.855541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-11-27 08:10:01.855585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.949 [2024-11-27 08:10:01.855607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.949 [2024-11-27 08:10:01.856206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.949 [2024-11-27 08:10:01.856723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.949 [2024-11-27 08:10:01.856731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.949 [2024-11-27 08:10:01.856737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.949 [2024-11-27 08:10:01.856743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.949 [2024-11-27 08:10:01.867876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.949 [2024-11-27 08:10:01.868336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-11-27 08:10:01.868380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.949 [2024-11-27 08:10:01.868403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.949 [2024-11-27 08:10:01.868816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.949 [2024-11-27 08:10:01.868994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.949 [2024-11-27 08:10:01.869002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.949 [2024-11-27 08:10:01.869009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.949 [2024-11-27 08:10:01.869021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.949 [2024-11-27 08:10:01.880785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.949 [2024-11-27 08:10:01.881223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-11-27 08:10:01.881256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.949 [2024-11-27 08:10:01.881280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.949 [2024-11-27 08:10:01.881820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.949 [2024-11-27 08:10:01.881999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.949 [2024-11-27 08:10:01.882007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.949 [2024-11-27 08:10:01.882014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.949 [2024-11-27 08:10:01.882020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.949 [2024-11-27 08:10:01.893724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.950 [2024-11-27 08:10:01.894150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.950 [2024-11-27 08:10:01.894167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.950 [2024-11-27 08:10:01.894174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.950 [2024-11-27 08:10:01.894338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.950 [2024-11-27 08:10:01.894502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.950 [2024-11-27 08:10:01.894510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.950 [2024-11-27 08:10:01.894515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.950 [2024-11-27 08:10:01.894521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.950 [2024-11-27 08:10:01.906552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.950 [2024-11-27 08:10:01.906906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.950 [2024-11-27 08:10:01.906962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.950 [2024-11-27 08:10:01.906987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.950 [2024-11-27 08:10:01.907570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.950 [2024-11-27 08:10:01.908067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.950 [2024-11-27 08:10:01.908075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.950 [2024-11-27 08:10:01.908081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.950 [2024-11-27 08:10:01.908087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.950 [2024-11-27 08:10:01.919436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.950 [2024-11-27 08:10:01.919875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.950 [2024-11-27 08:10:01.919927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.950 [2024-11-27 08:10:01.919967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.950 [2024-11-27 08:10:01.920494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.950 [2024-11-27 08:10:01.920748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.950 [2024-11-27 08:10:01.920760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.950 [2024-11-27 08:10:01.920769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.950 [2024-11-27 08:10:01.920778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.950 [2024-11-27 08:10:01.933094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.950 [2024-11-27 08:10:01.933554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.950 [2024-11-27 08:10:01.933603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.950 [2024-11-27 08:10:01.933626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.950 [2024-11-27 08:10:01.934159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.950 [2024-11-27 08:10:01.934332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.950 [2024-11-27 08:10:01.934340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.950 [2024-11-27 08:10:01.934347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.950 [2024-11-27 08:10:01.934353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.950 [2024-11-27 08:10:01.945941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.950 [2024-11-27 08:10:01.946343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.950 [2024-11-27 08:10:01.946359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.950 [2024-11-27 08:10:01.946366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.950 [2024-11-27 08:10:01.946529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.950 [2024-11-27 08:10:01.946693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.950 [2024-11-27 08:10:01.946701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.950 [2024-11-27 08:10:01.946706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.950 [2024-11-27 08:10:01.946712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.950 [2024-11-27 08:10:01.958767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.950 [2024-11-27 08:10:01.959215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.950 [2024-11-27 08:10:01.959259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.950 [2024-11-27 08:10:01.959281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.950 [2024-11-27 08:10:01.959781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.950 [2024-11-27 08:10:01.959960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.950 [2024-11-27 08:10:01.959969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.950 [2024-11-27 08:10:01.959975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.950 [2024-11-27 08:10:01.959981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.950 [2024-11-27 08:10:01.971585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.950 [2024-11-27 08:10:01.972018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.950 [2024-11-27 08:10:01.972063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.950 [2024-11-27 08:10:01.972086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.950 [2024-11-27 08:10:01.972668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.950 [2024-11-27 08:10:01.972872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.950 [2024-11-27 08:10:01.972879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.950 [2024-11-27 08:10:01.972885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.950 [2024-11-27 08:10:01.972891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.950 [2024-11-27 08:10:01.984446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.950 [2024-11-27 08:10:01.984769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.950 [2024-11-27 08:10:01.984786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.950 [2024-11-27 08:10:01.984793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.950 [2024-11-27 08:10:01.984970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.950 [2024-11-27 08:10:01.985144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.950 [2024-11-27 08:10:01.985153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.950 [2024-11-27 08:10:01.985159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.950 [2024-11-27 08:10:01.985165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.950 [2024-11-27 08:10:01.997465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.950 [2024-11-27 08:10:01.997880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.950 [2024-11-27 08:10:01.997897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.950 [2024-11-27 08:10:01.997905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.950 [2024-11-27 08:10:01.998085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.950 [2024-11-27 08:10:01.998259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.950 [2024-11-27 08:10:01.998271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.950 [2024-11-27 08:10:01.998278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.950 [2024-11-27 08:10:01.998284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.950 [2024-11-27 08:10:02.010444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.950 [2024-11-27 08:10:02.010774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.950 [2024-11-27 08:10:02.010819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.950 [2024-11-27 08:10:02.010842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.950 [2024-11-27 08:10:02.011454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.950 [2024-11-27 08:10:02.012050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.950 [2024-11-27 08:10:02.012086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.950 [2024-11-27 08:10:02.012094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.950 [2024-11-27 08:10:02.012100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.951 [2024-11-27 08:10:02.023369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.951 [2024-11-27 08:10:02.023748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.951 [2024-11-27 08:10:02.023765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.951 [2024-11-27 08:10:02.023773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.951 [2024-11-27 08:10:02.023946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.951 [2024-11-27 08:10:02.024125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.951 [2024-11-27 08:10:02.024134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.951 [2024-11-27 08:10:02.024140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.951 [2024-11-27 08:10:02.024146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.951 [2024-11-27 08:10:02.036226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.951 [2024-11-27 08:10:02.036663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.951 [2024-11-27 08:10:02.036707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.951 [2024-11-27 08:10:02.036730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.951 [2024-11-27 08:10:02.037168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.951 [2024-11-27 08:10:02.037343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.951 [2024-11-27 08:10:02.037351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.951 [2024-11-27 08:10:02.037358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.951 [2024-11-27 08:10:02.037364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:07.951 [2024-11-27 08:10:02.049144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.951 [2024-11-27 08:10:02.049616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.951 [2024-11-27 08:10:02.049633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:07.951 [2024-11-27 08:10:02.049640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:07.951 [2024-11-27 08:10:02.049821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:07.951 [2024-11-27 08:10:02.050004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.951 [2024-11-27 08:10:02.050014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.951 [2024-11-27 08:10:02.050022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.951 [2024-11-27 08:10:02.050032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.211 [2024-11-27 08:10:02.062305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.211 [2024-11-27 08:10:02.062738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-11-27 08:10:02.062784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.211 [2024-11-27 08:10:02.062807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.211 [2024-11-27 08:10:02.063311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.211 [2024-11-27 08:10:02.063491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.211 [2024-11-27 08:10:02.063499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.211 [2024-11-27 08:10:02.063505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.211 [2024-11-27 08:10:02.063512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.211 [2024-11-27 08:10:02.075186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.211 [2024-11-27 08:10:02.075663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-11-27 08:10:02.075708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.211 [2024-11-27 08:10:02.075730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.211 [2024-11-27 08:10:02.076235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.211 [2024-11-27 08:10:02.076409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.211 [2024-11-27 08:10:02.076417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.211 [2024-11-27 08:10:02.076424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.211 [2024-11-27 08:10:02.076430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.211 [2024-11-27 08:10:02.088008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.211 [2024-11-27 08:10:02.088469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-11-27 08:10:02.088488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.211 [2024-11-27 08:10:02.088495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.211 [2024-11-27 08:10:02.088658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.211 [2024-11-27 08:10:02.088822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.211 [2024-11-27 08:10:02.088830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.211 [2024-11-27 08:10:02.088836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.211 [2024-11-27 08:10:02.088842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.211 [2024-11-27 08:10:02.100853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.211 [2024-11-27 08:10:02.101299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-11-27 08:10:02.101343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.211 [2024-11-27 08:10:02.101366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.211 [2024-11-27 08:10:02.101962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.211 [2024-11-27 08:10:02.102382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.211 [2024-11-27 08:10:02.102390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.211 [2024-11-27 08:10:02.102396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.211 [2024-11-27 08:10:02.102402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.211 [2024-11-27 08:10:02.114602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.211 [2024-11-27 08:10:02.115070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-11-27 08:10:02.115088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.211 [2024-11-27 08:10:02.115095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.211 [2024-11-27 08:10:02.115269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.211 [2024-11-27 08:10:02.115442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.211 [2024-11-27 08:10:02.115450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.211 [2024-11-27 08:10:02.115457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.211 [2024-11-27 08:10:02.115463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.211 [2024-11-27 08:10:02.127481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.211 [2024-11-27 08:10:02.127879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-11-27 08:10:02.127924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.211 [2024-11-27 08:10:02.127965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.211 [2024-11-27 08:10:02.128428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.211 [2024-11-27 08:10:02.128602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.211 [2024-11-27 08:10:02.128610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.212 [2024-11-27 08:10:02.128617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.212 [2024-11-27 08:10:02.128623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.212 [2024-11-27 08:10:02.140370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.212 [2024-11-27 08:10:02.140680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-27 08:10:02.140696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.212 [2024-11-27 08:10:02.140703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.212 [2024-11-27 08:10:02.140867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.212 [2024-11-27 08:10:02.141035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.212 [2024-11-27 08:10:02.141044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.212 [2024-11-27 08:10:02.141050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.212 [2024-11-27 08:10:02.141056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.212 [2024-11-27 08:10:02.153170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.212 [2024-11-27 08:10:02.153605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-27 08:10:02.153649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.212 [2024-11-27 08:10:02.153672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.212 [2024-11-27 08:10:02.154269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.212 [2024-11-27 08:10:02.154450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.212 [2024-11-27 08:10:02.154458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.212 [2024-11-27 08:10:02.154464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.212 [2024-11-27 08:10:02.154470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.212 [2024-11-27 08:10:02.166108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.212 [2024-11-27 08:10:02.166449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-27 08:10:02.166465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.212 [2024-11-27 08:10:02.166472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.212 [2024-11-27 08:10:02.166634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.212 [2024-11-27 08:10:02.166797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.212 [2024-11-27 08:10:02.166805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.212 [2024-11-27 08:10:02.166814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.212 [2024-11-27 08:10:02.166820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.212 [2024-11-27 08:10:02.179041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.212 [2024-11-27 08:10:02.179426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-27 08:10:02.179442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.212 [2024-11-27 08:10:02.179449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.212 [2024-11-27 08:10:02.179622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.212 [2024-11-27 08:10:02.179795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.212 [2024-11-27 08:10:02.179803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.212 [2024-11-27 08:10:02.179809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.212 [2024-11-27 08:10:02.179815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.212 [2024-11-27 08:10:02.191924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.212 [2024-11-27 08:10:02.192373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-27 08:10:02.192390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.212 [2024-11-27 08:10:02.192397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.212 [2024-11-27 08:10:02.192570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.212 [2024-11-27 08:10:02.192743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.212 [2024-11-27 08:10:02.192752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.212 [2024-11-27 08:10:02.192758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.212 [2024-11-27 08:10:02.192764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.212 [2024-11-27 08:10:02.204818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.212 [2024-11-27 08:10:02.205278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-27 08:10:02.205321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.212 [2024-11-27 08:10:02.205344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.212 [2024-11-27 08:10:02.205832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.212 [2024-11-27 08:10:02.206011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.212 [2024-11-27 08:10:02.206020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.212 [2024-11-27 08:10:02.206026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.212 [2024-11-27 08:10:02.206032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.212 [2024-11-27 08:10:02.217762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.212 [2024-11-27 08:10:02.218203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-27 08:10:02.218220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.212 [2024-11-27 08:10:02.218227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.212 [2024-11-27 08:10:02.218400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.212 [2024-11-27 08:10:02.218574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.212 [2024-11-27 08:10:02.218582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.212 [2024-11-27 08:10:02.218588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.212 [2024-11-27 08:10:02.218594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.212 [2024-11-27 08:10:02.230683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.212 [2024-11-27 08:10:02.231018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-27 08:10:02.231034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.212 [2024-11-27 08:10:02.231041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.212 [2024-11-27 08:10:02.231203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.212 [2024-11-27 08:10:02.231368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.212 [2024-11-27 08:10:02.231375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.212 [2024-11-27 08:10:02.231381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.212 [2024-11-27 08:10:02.231387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.212 [2024-11-27 08:10:02.243578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.212 [2024-11-27 08:10:02.243926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-27 08:10:02.243942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.212 [2024-11-27 08:10:02.243954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.212 [2024-11-27 08:10:02.244142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.212 [2024-11-27 08:10:02.244314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.212 [2024-11-27 08:10:02.244322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.212 [2024-11-27 08:10:02.244329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.212 [2024-11-27 08:10:02.244335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.212 [2024-11-27 08:10:02.256493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.212 [2024-11-27 08:10:02.256920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-27 08:10:02.256936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.212 [2024-11-27 08:10:02.256946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.212 [2024-11-27 08:10:02.257139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.212 [2024-11-27 08:10:02.257313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.213 [2024-11-27 08:10:02.257321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.213 [2024-11-27 08:10:02.257327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.213 [2024-11-27 08:10:02.257333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.213 [2024-11-27 08:10:02.269383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.213 [2024-11-27 08:10:02.269715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-27 08:10:02.269731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.213 [2024-11-27 08:10:02.269738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.213 [2024-11-27 08:10:02.269901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.213 [2024-11-27 08:10:02.270092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.213 [2024-11-27 08:10:02.270101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.213 [2024-11-27 08:10:02.270107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.213 [2024-11-27 08:10:02.270113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.213 [2024-11-27 08:10:02.282310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.213 [2024-11-27 08:10:02.282780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-27 08:10:02.282824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.213 [2024-11-27 08:10:02.282847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.213 [2024-11-27 08:10:02.283443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.213 [2024-11-27 08:10:02.283919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.213 [2024-11-27 08:10:02.283927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.213 [2024-11-27 08:10:02.283933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.213 [2024-11-27 08:10:02.283940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.213 [2024-11-27 08:10:02.295215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.213 [2024-11-27 08:10:02.295585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-27 08:10:02.295601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.213 [2024-11-27 08:10:02.295607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.213 [2024-11-27 08:10:02.295771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.213 [2024-11-27 08:10:02.295939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.213 [2024-11-27 08:10:02.295952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.213 [2024-11-27 08:10:02.295959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.213 [2024-11-27 08:10:02.295965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.213 [2024-11-27 08:10:02.308075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.213 [2024-11-27 08:10:02.308485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-27 08:10:02.308501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.213 [2024-11-27 08:10:02.308508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.213 [2024-11-27 08:10:02.308680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.213 [2024-11-27 08:10:02.308854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.213 [2024-11-27 08:10:02.308862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.213 [2024-11-27 08:10:02.308868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.213 [2024-11-27 08:10:02.308875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.474 [2024-11-27 08:10:02.321138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.475 [2024-11-27 08:10:02.321586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.475 [2024-11-27 08:10:02.321629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.475 [2024-11-27 08:10:02.321651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.475 [2024-11-27 08:10:02.322082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.475 [2024-11-27 08:10:02.322257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.475 [2024-11-27 08:10:02.322265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.475 [2024-11-27 08:10:02.322271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.475 [2024-11-27 08:10:02.322277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.475 [2024-11-27 08:10:02.334154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.475 [2024-11-27 08:10:02.334568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.475 [2024-11-27 08:10:02.334585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.475 [2024-11-27 08:10:02.334592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.475 [2024-11-27 08:10:02.334765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.475 [2024-11-27 08:10:02.334939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.475 [2024-11-27 08:10:02.334953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.475 [2024-11-27 08:10:02.334963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.475 [2024-11-27 08:10:02.334970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.475 [2024-11-27 08:10:02.347033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.475 [2024-11-27 08:10:02.347341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.475 [2024-11-27 08:10:02.347358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.475 [2024-11-27 08:10:02.347365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.475 [2024-11-27 08:10:02.347538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.475 [2024-11-27 08:10:02.347713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.475 [2024-11-27 08:10:02.347721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.475 [2024-11-27 08:10:02.347727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.475 [2024-11-27 08:10:02.347733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.475 [2024-11-27 08:10:02.359952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.475 [2024-11-27 08:10:02.360245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.475 [2024-11-27 08:10:02.360261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.475 [2024-11-27 08:10:02.360268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.475 [2024-11-27 08:10:02.360441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.475 [2024-11-27 08:10:02.360614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.475 [2024-11-27 08:10:02.360622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.475 [2024-11-27 08:10:02.360629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.475 [2024-11-27 08:10:02.360635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.475 [2024-11-27 08:10:02.372960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.475 [2024-11-27 08:10:02.373364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.475 [2024-11-27 08:10:02.373381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.475 [2024-11-27 08:10:02.373388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.475 [2024-11-27 08:10:02.373561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.475 [2024-11-27 08:10:02.373736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.475 [2024-11-27 08:10:02.373745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.475 [2024-11-27 08:10:02.373751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.475 [2024-11-27 08:10:02.373757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.475 [2024-11-27 08:10:02.385770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.475 [2024-11-27 08:10:02.386126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.475 [2024-11-27 08:10:02.386142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.475 [2024-11-27 08:10:02.386150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.475 [2024-11-27 08:10:02.386324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.475 [2024-11-27 08:10:02.386497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.475 [2024-11-27 08:10:02.386505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.475 [2024-11-27 08:10:02.386512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.475 [2024-11-27 08:10:02.386518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.475 [2024-11-27 08:10:02.398980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.475 [2024-11-27 08:10:02.399335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.475 [2024-11-27 08:10:02.399351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.475 [2024-11-27 08:10:02.399358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.475 [2024-11-27 08:10:02.399536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.475 [2024-11-27 08:10:02.399715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.475 [2024-11-27 08:10:02.399723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.475 [2024-11-27 08:10:02.399729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.475 [2024-11-27 08:10:02.399736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.475 [2024-11-27 08:10:02.411856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.475 [2024-11-27 08:10:02.412239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.475 [2024-11-27 08:10:02.412283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.475 [2024-11-27 08:10:02.412305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.475 [2024-11-27 08:10:02.412868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.475 [2024-11-27 08:10:02.413047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.475 [2024-11-27 08:10:02.413056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.475 [2024-11-27 08:10:02.413062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.475 [2024-11-27 08:10:02.413068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.475 [2024-11-27 08:10:02.424789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.475 [2024-11-27 08:10:02.425174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.475 [2024-11-27 08:10:02.425208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.475 [2024-11-27 08:10:02.425242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.475 [2024-11-27 08:10:02.425826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.475 [2024-11-27 08:10:02.426121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.475 [2024-11-27 08:10:02.426134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.475 [2024-11-27 08:10:02.426143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.475 [2024-11-27 08:10:02.426152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.475 [2024-11-27 08:10:02.438294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.475 [2024-11-27 08:10:02.438723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.476 [2024-11-27 08:10:02.438740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.476 [2024-11-27 08:10:02.438747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.476 [2024-11-27 08:10:02.438920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.476 [2024-11-27 08:10:02.439097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.476 [2024-11-27 08:10:02.439106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.476 [2024-11-27 08:10:02.439112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.476 [2024-11-27 08:10:02.439118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.476 [2024-11-27 08:10:02.451158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.476 [2024-11-27 08:10:02.451583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.476 [2024-11-27 08:10:02.451599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.476 [2024-11-27 08:10:02.451606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.476 [2024-11-27 08:10:02.451779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.476 [2024-11-27 08:10:02.451959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.476 [2024-11-27 08:10:02.451968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.476 [2024-11-27 08:10:02.451974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.476 [2024-11-27 08:10:02.451980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.476 [2024-11-27 08:10:02.464040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.476 [2024-11-27 08:10:02.464357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.476 [2024-11-27 08:10:02.464374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.476 [2024-11-27 08:10:02.464381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.476 [2024-11-27 08:10:02.464553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.476 [2024-11-27 08:10:02.464730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.476 [2024-11-27 08:10:02.464739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.476 [2024-11-27 08:10:02.464745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.476 [2024-11-27 08:10:02.464751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2601223 Killed "${NVMF_APP[@]}" "$@" 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2602487 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2602487 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2602487 ']' 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.476 [2024-11-27 08:10:02.477155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:08.476 [2024-11-27 08:10:02.477496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.476 [2024-11-27 08:10:02.477513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.476 [2024-11-27 08:10:02.477522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.476 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.476 [2024-11-27 08:10:02.477700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.476 [2024-11-27 08:10:02.477880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.476 [2024-11-27 08:10:02.477889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.476 [2024-11-27 08:10:02.477896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.476 [2024-11-27 08:10:02.477903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
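(The "Killed ${NVMF_APP[@]}" line above is bdevperf.sh deliberately killing the running nvmf target and calling tgt_init, which restarts nvmf_tgt (pid 2602487) inside the cvl_0_0_ns_spdk namespace; the host-side reconnect errors keep repeating until the new target is configured and listening again. A minimal sketch of the usual SPDK NVMe-oF/TCP bring-up that follows such a restart is shown below; the rpc.py path and the Malloc0 bdev with its sizes are assumptions for illustration, and the exact steps the test's tgt_init/nvmfappstart helpers run live in the repo's test/nvmf common scripts and may differ.)

# Sketch only: typical target configuration once nvmf_tgt is up and its RPC
# socket (/var/tmp/spdk.sock) is ready, issued inside the target's namespace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
ns="ip netns exec cvl_0_0_ns_spdk"
$ns $rpc nvmf_create_transport -t tcp                         # register the TCP transport
$ns $rpc bdev_malloc_create 64 512 -b Malloc0                 # small RAM bdev to export (assumed)
$ns $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a  # allow any host
$ns $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$ns $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Once the listener is up, the host's next reconnect poll can succeed and the
# "Resetting controller failed" messages stop.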
00:27:08.476 [2024-11-27 08:10:02.490338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.476 [2024-11-27 08:10:02.490761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.476 [2024-11-27 08:10:02.490778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.476 [2024-11-27 08:10:02.490785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.476 [2024-11-27 08:10:02.490975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.476 [2024-11-27 08:10:02.491155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.476 [2024-11-27 08:10:02.491163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.476 [2024-11-27 08:10:02.491170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.476 [2024-11-27 08:10:02.491176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.476 [2024-11-27 08:10:02.503395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.476 [2024-11-27 08:10:02.503693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.476 [2024-11-27 08:10:02.503708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.476 [2024-11-27 08:10:02.503716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.476 [2024-11-27 08:10:02.503893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.476 [2024-11-27 08:10:02.504077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.476 [2024-11-27 08:10:02.504086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.476 [2024-11-27 08:10:02.504093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.476 [2024-11-27 08:10:02.504099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.476 [2024-11-27 08:10:02.516507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.476 [2024-11-27 08:10:02.516803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.476 [2024-11-27 08:10:02.516820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.476 [2024-11-27 08:10:02.516827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.476 [2024-11-27 08:10:02.517011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.476 [2024-11-27 08:10:02.517190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.476 [2024-11-27 08:10:02.517199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.476 [2024-11-27 08:10:02.517206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.476 [2024-11-27 08:10:02.517212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.476 [2024-11-27 08:10:02.528969] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:27:08.476 [2024-11-27 08:10:02.529008] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.476 [2024-11-27 08:10:02.529530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.476 [2024-11-27 08:10:02.529875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.476 [2024-11-27 08:10:02.529891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.476 [2024-11-27 08:10:02.529899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.476 [2024-11-27 08:10:02.530085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.476 [2024-11-27 08:10:02.530264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.476 [2024-11-27 08:10:02.530273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.477 [2024-11-27 08:10:02.530279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.477 [2024-11-27 08:10:02.530286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.477 [2024-11-27 08:10:02.542667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.477 [2024-11-27 08:10:02.543019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.477 [2024-11-27 08:10:02.543037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.477 [2024-11-27 08:10:02.543045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.477 [2024-11-27 08:10:02.543224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.477 [2024-11-27 08:10:02.543403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.477 [2024-11-27 08:10:02.543411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.477 [2024-11-27 08:10:02.543418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.477 [2024-11-27 08:10:02.543425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.477 [2024-11-27 08:10:02.555641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.477 [2024-11-27 08:10:02.556080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.477 [2024-11-27 08:10:02.556097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.477 [2024-11-27 08:10:02.556105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.477 [2024-11-27 08:10:02.556284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.477 [2024-11-27 08:10:02.556463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.477 [2024-11-27 08:10:02.556471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.477 [2024-11-27 08:10:02.556479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.477 [2024-11-27 08:10:02.556485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.477 [2024-11-27 08:10:02.568837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.477 [2024-11-27 08:10:02.569214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.477 [2024-11-27 08:10:02.569232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.477 [2024-11-27 08:10:02.569240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.477 [2024-11-27 08:10:02.569419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.477 [2024-11-27 08:10:02.569598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.477 [2024-11-27 08:10:02.569609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.477 [2024-11-27 08:10:02.569617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.477 [2024-11-27 08:10:02.569624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.739 [2024-11-27 08:10:02.582089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.739 [2024-11-27 08:10:02.582469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.739 [2024-11-27 08:10:02.582487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.739 [2024-11-27 08:10:02.582495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.739 [2024-11-27 08:10:02.582676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.739 [2024-11-27 08:10:02.582855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.739 [2024-11-27 08:10:02.582863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.739 [2024-11-27 08:10:02.582870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.739 [2024-11-27 08:10:02.582876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.739 [2024-11-27 08:10:02.595102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.739 [2024-11-27 08:10:02.595488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.739 [2024-11-27 08:10:02.595505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.739 [2024-11-27 08:10:02.595512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.739 [2024-11-27 08:10:02.595691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.739 [2024-11-27 08:10:02.595870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.739 [2024-11-27 08:10:02.595879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.739 [2024-11-27 08:10:02.595885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.739 [2024-11-27 08:10:02.595892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.739 [2024-11-27 08:10:02.598527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:08.739 [2024-11-27 08:10:02.608237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.739 [2024-11-27 08:10:02.608549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.739 [2024-11-27 08:10:02.608568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.739 [2024-11-27 08:10:02.608577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.739 [2024-11-27 08:10:02.608756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.739 [2024-11-27 08:10:02.608936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.739 [2024-11-27 08:10:02.608945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.739 [2024-11-27 08:10:02.608965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.739 [2024-11-27 08:10:02.608972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.739 [2024-11-27 08:10:02.621362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.739 [2024-11-27 08:10:02.621735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.739 [2024-11-27 08:10:02.621753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.739 [2024-11-27 08:10:02.621761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.739 [2024-11-27 08:10:02.621940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.739 [2024-11-27 08:10:02.622123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.739 [2024-11-27 08:10:02.622132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.739 [2024-11-27 08:10:02.622139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.739 [2024-11-27 08:10:02.622146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.739 [2024-11-27 08:10:02.634411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.739 [2024-11-27 08:10:02.634795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.739 [2024-11-27 08:10:02.634812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.739 [2024-11-27 08:10:02.634820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.739 [2024-11-27 08:10:02.635004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.739 [2024-11-27 08:10:02.635185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.739 [2024-11-27 08:10:02.635193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.739 [2024-11-27 08:10:02.635200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.739 [2024-11-27 08:10:02.635206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.739 [2024-11-27 08:10:02.641337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.739 [2024-11-27 08:10:02.641362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.739 [2024-11-27 08:10:02.641369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.739 [2024-11-27 08:10:02.641375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.739 [2024-11-27 08:10:02.641379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
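(The app_setup_trace notices above come from starting nvmf_tgt with -e 0xFFFF, i.e. all tracepoint groups enabled. If the reconnect behaviour needs to be inspected offline, a snapshot can be taken as the notice suggests; the binary path below is an assumption based on the standard SPDK build layout.)

# Capture a snapshot of the nvmf tracepoints for instance 0, and keep the raw
# shared-memory buffer for later analysis, as the startup notice recommends.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0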
00:27:08.739 [2024-11-27 08:10:02.642777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.739 [2024-11-27 08:10:02.642864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.739 [2024-11-27 08:10:02.642865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.740 [2024-11-27 08:10:02.647493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.740 [2024-11-27 08:10:02.647877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.740 [2024-11-27 08:10:02.647896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.740 [2024-11-27 08:10:02.647904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.740 [2024-11-27 08:10:02.648092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.740 [2024-11-27 08:10:02.648273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.740 [2024-11-27 08:10:02.648282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.740 [2024-11-27 08:10:02.648289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.740 [2024-11-27 08:10:02.648296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.740 [2024-11-27 08:10:02.660701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.740 [2024-11-27 08:10:02.661027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.740 [2024-11-27 08:10:02.661048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.740 [2024-11-27 08:10:02.661057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.740 [2024-11-27 08:10:02.661237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.740 [2024-11-27 08:10:02.661418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.740 [2024-11-27 08:10:02.661427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.740 [2024-11-27 08:10:02.661435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.740 [2024-11-27 08:10:02.661442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.740 [2024-11-27 08:10:02.673852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.740 [2024-11-27 08:10:02.674255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.740 [2024-11-27 08:10:02.674275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.740 [2024-11-27 08:10:02.674284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.740 [2024-11-27 08:10:02.674464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.740 [2024-11-27 08:10:02.674643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.740 [2024-11-27 08:10:02.674651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.740 [2024-11-27 08:10:02.674658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.740 [2024-11-27 08:10:02.674666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.740 [2024-11-27 08:10:02.687069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.740 [2024-11-27 08:10:02.687509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.740 [2024-11-27 08:10:02.687529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.740 [2024-11-27 08:10:02.687537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.740 [2024-11-27 08:10:02.687716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.740 [2024-11-27 08:10:02.687896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.740 [2024-11-27 08:10:02.687911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.740 [2024-11-27 08:10:02.687919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.740 [2024-11-27 08:10:02.687927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
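The block above is the failure signature that repeats through this whole stretch of the log: posix_sock_create reports connect() failing with errno 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock gives up on 10.0.0.2 port 4420, the flush on the now-dead descriptor fails, and bdev_nvme_reset_ctrlr_complete logs "Resetting controller failed" before the next retry a few milliseconds later. Nothing is listening on that address and port yet; the retries only stop once the trace further down shows the listener being added. A hedged spot-check, not part of the harness, that makes the same diagnosis from the target side (the namespace name cvl_0_0_ns_spdk matches this environment's setup; ss is plain iproute2):

  # errno 111 is ECONNREFUSED: nothing bound to 10.0.0.2:4420 yet.
  ip netns exec cvl_0_0_ns_spdk ss -ltn | grep ':4420' \
    || echo 'no NVMe/TCP listener on 4420 yet; connect() will keep failing with errno 111'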
00:27:08.740 [2024-11-27 08:10:02.700173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.740 [2024-11-27 08:10:02.700562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.740 [2024-11-27 08:10:02.700583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.740 [2024-11-27 08:10:02.700592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.740 [2024-11-27 08:10:02.700771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.740 [2024-11-27 08:10:02.700958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.740 [2024-11-27 08:10:02.700968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.740 [2024-11-27 08:10:02.700975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.740 [2024-11-27 08:10:02.700983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.740 [2024-11-27 08:10:02.713381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.740 [2024-11-27 08:10:02.713777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.740 [2024-11-27 08:10:02.713795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.740 [2024-11-27 08:10:02.713802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.740 [2024-11-27 08:10:02.713987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.740 [2024-11-27 08:10:02.714167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.740 [2024-11-27 08:10:02.714176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.740 [2024-11-27 08:10:02.714183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.740 [2024-11-27 08:10:02.714190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.740 [2024-11-27 08:10:02.726588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.740 [2024-11-27 08:10:02.726895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.740 [2024-11-27 08:10:02.726912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.740 [2024-11-27 08:10:02.726919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.740 [2024-11-27 08:10:02.727101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.740 [2024-11-27 08:10:02.727280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.740 [2024-11-27 08:10:02.727289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.740 [2024-11-27 08:10:02.727296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.740 [2024-11-27 08:10:02.727307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.740 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:08.740 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:08.740 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:08.740 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:08.740 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.741 [2024-11-27 08:10:02.739725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.741 [2024-11-27 08:10:02.740125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.741 [2024-11-27 08:10:02.740143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.741 [2024-11-27 08:10:02.740151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.741 [2024-11-27 08:10:02.740331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.741 [2024-11-27 08:10:02.740511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.741 [2024-11-27 08:10:02.740521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.741 [2024-11-27 08:10:02.740528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.741 [2024-11-27 08:10:02.740535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.741 [2024-11-27 08:10:02.752798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.741 [2024-11-27 08:10:02.753156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.741 [2024-11-27 08:10:02.753173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.741 [2024-11-27 08:10:02.753180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.741 [2024-11-27 08:10:02.753359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.741 [2024-11-27 08:10:02.753538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.741 [2024-11-27 08:10:02.753547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.741 [2024-11-27 08:10:02.753554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.741 [2024-11-27 08:10:02.753561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.741 [2024-11-27 08:10:02.765950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.741 [2024-11-27 08:10:02.766260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.741 [2024-11-27 08:10:02.766277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.741 [2024-11-27 08:10:02.766285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.741 [2024-11-27 08:10:02.766463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.741 [2024-11-27 08:10:02.766641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.741 [2024-11-27 08:10:02.766650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.741 [2024-11-27 08:10:02.766660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.741 [2024-11-27 08:10:02.766667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.741 [2024-11-27 08:10:02.779052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.741 [2024-11-27 08:10:02.779392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.741 [2024-11-27 08:10:02.779409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.741 [2024-11-27 08:10:02.779416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.741 [2024-11-27 08:10:02.779594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.741 [2024-11-27 08:10:02.779773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.741 [2024-11-27 08:10:02.779782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.741 [2024-11-27 08:10:02.779789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.741 [2024-11-27 08:10:02.779795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.741 [2024-11-27 08:10:02.779941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.741 [2024-11-27 08:10:02.792181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.741 [2024-11-27 08:10:02.792606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.741 [2024-11-27 08:10:02.792622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.741 [2024-11-27 08:10:02.792630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.741 [2024-11-27 08:10:02.792807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.741 [2024-11-27 08:10:02.792991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.741 [2024-11-27 08:10:02.793001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.741 [2024-11-27 08:10:02.793008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.741 [2024-11-27 08:10:02.793014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.741 [2024-11-27 08:10:02.805237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.741 [2024-11-27 08:10:02.805661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.741 [2024-11-27 08:10:02.805678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.741 [2024-11-27 08:10:02.805690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.741 [2024-11-27 08:10:02.805868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.741 [2024-11-27 08:10:02.806052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.741 [2024-11-27 08:10:02.806062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.741 [2024-11-27 08:10:02.806069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.741 [2024-11-27 08:10:02.806076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.741 Malloc0 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.741 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.741 [2024-11-27 08:10:02.818299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.741 [2024-11-27 08:10:02.818726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.741 [2024-11-27 08:10:02.818744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.741 [2024-11-27 08:10:02.818751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.741 [2024-11-27 08:10:02.818930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.742 [2024-11-27 08:10:02.819114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.742 [2024-11-27 08:10:02.819123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.742 [2024-11-27 08:10:02.819130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.742 [2024-11-27 08:10:02.819136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.742 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.742 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:08.742 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.742 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.742 4804.00 IOPS, 18.77 MiB/s [2024-11-27T07:10:02.851Z] [2024-11-27 08:10:02.832337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.742 [2024-11-27 08:10:02.832759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.742 [2024-11-27 08:10:02.832777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x833510 with addr=10.0.0.2, port=4420 00:27:08.742 [2024-11-27 08:10:02.832784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833510 is same with the state(6) to be set 00:27:08.742 [2024-11-27 08:10:02.832967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833510 (9): Bad file descriptor 00:27:08.742 [2024-11-27 08:10:02.833147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.742 [2024-11-27 08:10:02.833156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.742 [2024-11-27 08:10:02.833163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.742 [2024-11-27 08:10:02.833173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.742 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.742 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.742 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.742 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.742 [2024-11-27 08:10:02.839593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.742 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.742 08:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2601484 00:27:08.742 [2024-11-27 08:10:02.845414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.001 [2024-11-27 08:10:02.916776] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
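Interleaved with the reconnect noise, the shell trace shows the target side being configured over JSON-RPC: nvmf_create_transport -t tcp -o -u 8192, a Malloc0 bdev (bdev_malloc_create 64 512 -b Malloc0), the subsystem nqn.2016-06.io.spdk:cnode1, the namespace attach, and finally nvmf_subsystem_add_listener on 10.0.0.2 port 4420, after which the very next reset attempt is logged as "Resetting controller successful". Issued by hand against a running nvmf_tgt, the same sequence would look roughly like the sketch below; the rpc.py path and the use of the default RPC socket are assumptions, while the method names and arguments are copied from the trace:

  # Same rpc_cmd sequence as in the trace above, issued manually
  # (rpc.py location and default RPC socket assumed).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Once the listener is up, the host's pending reconnect succeeds.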
00:27:10.875 5503.14 IOPS, 21.50 MiB/s [2024-11-27T07:10:05.922Z] 6168.38 IOPS, 24.10 MiB/s [2024-11-27T07:10:06.857Z] 6682.67 IOPS, 26.10 MiB/s [2024-11-27T07:10:08.232Z] 7084.70 IOPS, 27.67 MiB/s [2024-11-27T07:10:09.167Z] 7418.09 IOPS, 28.98 MiB/s [2024-11-27T07:10:10.224Z] 7698.33 IOPS, 30.07 MiB/s [2024-11-27T07:10:11.167Z] 7929.62 IOPS, 30.98 MiB/s [2024-11-27T07:10:12.103Z] 8134.50 IOPS, 31.78 MiB/s [2024-11-27T07:10:12.103Z] 8321.40 IOPS, 32.51 MiB/s 00:27:17.994 Latency(us) 00:27:17.994 [2024-11-27T07:10:12.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.994 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:17.994 Verification LBA range: start 0x0 length 0x4000 00:27:17.994 Nvme1n1 : 15.01 8322.77 32.51 10908.48 0.00 6635.70 676.73 17894.18 00:27:17.994 [2024-11-27T07:10:12.103Z] =================================================================================================================== 00:27:17.994 [2024-11-27T07:10:12.103Z] Total : 8322.77 32.51 10908.48 0.00 6635.70 676.73 17894.18 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:17.994 rmmod nvme_tcp 00:27:17.994 rmmod nvme_fabrics 00:27:17.994 rmmod nvme_keyring 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2602487 ']' 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2602487 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2602487 ']' 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2602487 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:17.994 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2602487 
00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2602487' 00:27:18.253 killing process with pid 2602487 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2602487 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2602487 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.253 08:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:20.786 00:27:20.786 real 0m25.124s 00:27:20.786 user 1m0.499s 00:27:20.786 sys 0m5.949s 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:20.786 ************************************ 00:27:20.786 END TEST nvmf_bdevperf 00:27:20.786 ************************************ 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.786 ************************************ 00:27:20.786 START TEST nvmf_target_disconnect 00:27:20.786 ************************************ 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:20.786 * Looking for test storage... 
00:27:20.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.786 --rc genhtml_branch_coverage=1 00:27:20.786 --rc genhtml_function_coverage=1 00:27:20.786 --rc genhtml_legend=1 00:27:20.786 --rc geninfo_all_blocks=1 00:27:20.786 --rc geninfo_unexecuted_blocks=1 00:27:20.786 00:27:20.786 ' 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.786 --rc genhtml_branch_coverage=1 00:27:20.786 --rc genhtml_function_coverage=1 00:27:20.786 --rc genhtml_legend=1 00:27:20.786 --rc geninfo_all_blocks=1 00:27:20.786 --rc geninfo_unexecuted_blocks=1 00:27:20.786 00:27:20.786 ' 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.786 --rc genhtml_branch_coverage=1 00:27:20.786 --rc genhtml_function_coverage=1 00:27:20.786 --rc genhtml_legend=1 00:27:20.786 --rc geninfo_all_blocks=1 00:27:20.786 --rc geninfo_unexecuted_blocks=1 00:27:20.786 00:27:20.786 ' 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.786 --rc genhtml_branch_coverage=1 00:27:20.786 --rc genhtml_function_coverage=1 00:27:20.786 --rc genhtml_legend=1 00:27:20.786 --rc geninfo_all_blocks=1 00:27:20.786 --rc geninfo_unexecuted_blocks=1 00:27:20.786 00:27:20.786 ' 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.786 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:20.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.787 08:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:26.059 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:26.059 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:26.059 Found net devices under 0000:86:00.0: cvl_0_0 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:26.059 Found net devices under 0000:86:00.1: cvl_0_1 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
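From here the harness stops probing variables and wires up the actual test network: of the two E810 ports found above, cvl_0_0 is moved into a fresh network namespace and gets the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens TCP port 4420 and both directions are ping-checked. The trace entries just below show the exact commands; condensed into a stand-alone sketch (interface names, addresses and the netns name are taken from that trace, and everything is assumed to run as root):

  # Target NIC into its own netns at 10.0.0.2, initiator NIC stays at 10.0.0.1.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in from the initiator interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator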
00:27:26.059 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:26.060 08:10:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:26.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:27:26.060 00:27:26.060 --- 10.0.0.2 ping statistics --- 00:27:26.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.060 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:26.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:27:26.060 00:27:26.060 --- 10.0.0.1 ping statistics --- 00:27:26.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.060 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:26.060 ************************************ 00:27:26.060 START TEST nvmf_target_disconnect_tc1 00:27:26.060 ************************************ 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:26.060 08:10:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:26.060 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.320 [2024-11-27 08:10:20.211233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-11-27 08:10:20.211274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b0ac0 with addr=10.0.0.2, port=4420 00:27:26.320 [2024-11-27 08:10:20.211293] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:26.320 [2024-11-27 08:10:20.211303] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:26.320 [2024-11-27 08:10:20.211309] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:26.320 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:26.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:26.320 Initializing NVMe Controllers 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:26.320 00:27:26.320 real 0m0.090s 00:27:26.320 user 0m0.037s 00:27:26.320 sys 0m0.053s 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.320 ************************************ 00:27:26.320 END TEST nvmf_target_disconnect_tc1 00:27:26.320 ************************************ 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
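The connect() failure above is reported with errno = 111, which on Linux is ECONNREFUSED: tc1 launches the reconnect example before any nvmf target has been brought up in the namespace (that only happens below, in tc2), so the probe is refused and the NOT wrapper counts the non-zero exit as the expected result. A one-liner to decode the errno on any Linux host (illustrative only, not part of the test suite):

  # decode errno 111 as seen in the posix_sock_create error above (illustrative)
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused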
00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:26.320 ************************************ 00:27:26.320 START TEST nvmf_target_disconnect_tc2 00:27:26.320 ************************************ 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2607590 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2607590 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2607590 ']' 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.320 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.320 [2024-11-27 08:10:20.348823] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:27:26.320 [2024-11-27 08:10:20.348867] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.580 [2024-11-27 08:10:20.428524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:26.580 [2024-11-27 08:10:20.471623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.580 [2024-11-27 08:10:20.471660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
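The target above is started with -m 0xF0, an SPDK reactor core mask selecting cores 4 through 7; the "Reactor started on core 4..7" notices that follow confirm this, while the reconnect initiator later runs with -c 0xF (cores 0 through 3) so the two sides do not share cores. A minimal sketch of how such a mask expands to cores (illustrative, not part of the test scripts):

  # expand the 0xF0 core mask passed to nvmf_tgt above (illustrative)
  mask=0xF0
  for core in $(seq 0 7); do
    (( (mask >> core) & 1 )) && echo "reactor core $core"
  done
  # prints reactor core 4 .. reactor core 7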
00:27:26.580 [2024-11-27 08:10:20.471668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.580 [2024-11-27 08:10:20.471676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.580 [2024-11-27 08:10:20.471682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:26.580 [2024-11-27 08:10:20.473379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:26.580 [2024-11-27 08:10:20.473486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:26.580 [2024-11-27 08:10:20.473594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:26.580 [2024-11-27 08:10:20.473594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.580 Malloc0 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.580 [2024-11-27 08:10:20.646694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.580 08:10:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.580 [2024-11-27 08:10:20.674945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2607622 00:27:26.580 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:26.839 08:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:28.756 08:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2607590 00:27:28.756 08:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error 
(sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 [2024-11-27 08:10:22.701315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Write completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.756 Read completed with error (sct=0, sc=8) 00:27:28.756 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read 
completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 [2024-11-27 08:10:22.701524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 
00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 [2024-11-27 08:10:22.701718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 
starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Write completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 Read completed with error (sct=0, sc=8) 00:27:28.757 starting I/O failed 00:27:28.757 [2024-11-27 08:10:22.701914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.757 [2024-11-27 08:10:22.702048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.757 [2024-11-27 08:10:22.702071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.757 qpair failed and we were unable to recover it. 00:27:28.757 [2024-11-27 08:10:22.702267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.757 [2024-11-27 08:10:22.702278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.757 qpair failed and we were unable to recover it. 00:27:28.757 [2024-11-27 08:10:22.702381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.757 [2024-11-27 08:10:22.702392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.757 qpair failed and we were unable to recover it. 00:27:28.757 [2024-11-27 08:10:22.702496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.757 [2024-11-27 08:10:22.702507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.757 qpair failed and we were unable to recover it. 00:27:28.757 [2024-11-27 08:10:22.702582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.757 [2024-11-27 08:10:22.702592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.757 qpair failed and we were unable to recover it. 00:27:28.757 [2024-11-27 08:10:22.702686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.757 [2024-11-27 08:10:22.702697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.757 qpair failed and we were unable to recover it. 00:27:28.757 [2024-11-27 08:10:22.702829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.757 [2024-11-27 08:10:22.702841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.757 qpair failed and we were unable to recover it. 00:27:28.757 [2024-11-27 08:10:22.703000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.757 [2024-11-27 08:10:22.703011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.757 qpair failed and we were unable to recover it. 
00:27:28.757 [2024-11-27 08:10:22.703097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.757 [2024-11-27 08:10:22.703108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.757 qpair failed and we were unable to recover it. 00:27:28.757 [2024-11-27 08:10:22.703243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.703255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.703408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.703419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.703519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.703530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.703626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.703637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.703733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.703744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.703828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.703838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.703953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.703965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.704108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.704119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.704212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.704222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 
00:27:28.758 [2024-11-27 08:10:22.704296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.704306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.704407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.704417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.704579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.704604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.704724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.704747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.704836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.704846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.704915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.704925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.705066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.705078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.705165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.705175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.705241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.705251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.705322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.705332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 
00:27:28.758 [2024-11-27 08:10:22.705407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.705417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.705570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.705581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.705779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.705790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.705869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.705879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.705954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.705965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.706026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.706116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.706204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.706279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.706368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 
00:27:28.758 [2024-11-27 08:10:22.706446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.706554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.706628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.706726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.706900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.706982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.706993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.707062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.707072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.707147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.707157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.707243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.758 [2024-11-27 08:10:22.707254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.758 qpair failed and we were unable to recover it. 00:27:28.758 [2024-11-27 08:10:22.707391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.707401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 
00:27:28.759 [2024-11-27 08:10:22.707482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.707492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.707558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.707568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.707633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.707642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.707708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.707718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.707783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.707792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.707860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.707870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.707933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.707942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.708080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.708089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.708174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.708183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.708280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.708290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 
00:27:28.759 [2024-11-27 08:10:22.708358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.708367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.708443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.708453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.708595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.708609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.708696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.708715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.708871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.708882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.708956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.708968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.709037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.709047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.709177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.709186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.709285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.709295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.709431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.709444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 
00:27:28.759 [2024-11-27 08:10:22.709539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.709549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.709630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.709640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.709711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.709721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.709796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.709809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.709897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.709911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.709991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.710078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.710186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.710287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.710382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 
00:27:28.759 [2024-11-27 08:10:22.710489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.710579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.710662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.710750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.710840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.710934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.710954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.711116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.759 [2024-11-27 08:10:22.711132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.759 qpair failed and we were unable to recover it. 00:27:28.759 [2024-11-27 08:10:22.711211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.711224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.711297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.711310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.711410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.711428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 
00:27:28.760 [2024-11-27 08:10:22.711590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.711604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.711690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.711704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.711782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.711795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.711889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.711902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.711983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.711997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.712073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.712086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.712238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.712252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.712397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.712411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.712489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.712502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.712577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.712590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 
00:27:28.760 [2024-11-27 08:10:22.712682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.712696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.712867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.712880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.713001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.713016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.713102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.713117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.713188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.713201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.713278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.713291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.713442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.713456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.713526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.713538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.713719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.713751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.714021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.714054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 
00:27:28.760 [2024-11-27 08:10:22.714316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.714348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.714610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.714641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.714886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.714918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.715096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.715129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.715326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.715358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.715508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.715522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.715732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.715769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.716003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.716037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.716240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.716272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.716406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.716437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 
00:27:28.760 [2024-11-27 08:10:22.716577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.716608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.716796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.716828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.716973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.717006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.717206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.717237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.717367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.717397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.760 [2024-11-27 08:10:22.717587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.760 [2024-11-27 08:10:22.717601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.760 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.717762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.717793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.718072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.718106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.718253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.718284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.718427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.718458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 
00:27:28.761 [2024-11-27 08:10:22.718760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.718798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.718943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.718973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.719084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.719098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.719254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.719269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.719481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.719496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.719747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.719762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.719918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.719932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.720117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.720134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.720315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.720329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.720424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.720435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 
00:27:28.761 [2024-11-27 08:10:22.720544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.720555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.720755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.720794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.721001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.721035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.721187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.721229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.721425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.721458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.721659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.721692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.721959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.721994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.722137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.722170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.722316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.722349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.722488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.722520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 
00:27:28.761 [2024-11-27 08:10:22.722749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.722783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.723029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.723063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.723269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.723302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.723488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.723499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.723632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.723643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.723848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.723882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.724167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.761 [2024-11-27 08:10:22.724201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.761 qpair failed and we were unable to recover it. 00:27:28.761 [2024-11-27 08:10:22.724351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.724385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.724610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.724643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.724910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.724941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 
00:27:28.762 [2024-11-27 08:10:22.725087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.725121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.725348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.725380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.725516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.725549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.725730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.725764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.725992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.726025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.726226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.726259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.726453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.726485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.726762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.726795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.727018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.727029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.727141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.727152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 
00:27:28.762 [2024-11-27 08:10:22.727257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.727274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.727451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.727466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.727722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.727737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.727926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.727970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.728195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.728229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.728437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.728469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.728730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.728764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.729018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.729052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.729334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.729367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.729512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.729545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 
00:27:28.762 [2024-11-27 08:10:22.729819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.729834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.730119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.730135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.730234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.730249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.730345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.730363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.730519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.730534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.730775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.730807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.731005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.731039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.731151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.731182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.731430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.731463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.731658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.731689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 
00:27:28.762 [2024-11-27 08:10:22.731895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.731927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.732069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.732084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.732279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.732294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.732389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.732402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.762 qpair failed and we were unable to recover it. 00:27:28.762 [2024-11-27 08:10:22.732481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.762 [2024-11-27 08:10:22.732496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.732629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.732644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.732720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.732734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.732970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.732986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.733137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.733151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.733329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.733370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 
00:27:28.763 [2024-11-27 08:10:22.733578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.733610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.733901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.733934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.734225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.734258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.734455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.734488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.734617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.734633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.734773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.734787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.734995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.735010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.735172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.735186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.735297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.735311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.735411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.735426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 
00:27:28.763 [2024-11-27 08:10:22.735636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.735652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.735811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.735827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.736049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.736064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.736299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.736314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.736437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.736452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.736556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.736570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.736771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.736787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.736890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.736905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.736998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.737012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.737237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.737252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 
00:27:28.763 [2024-11-27 08:10:22.737357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.737372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.737565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.737580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.737777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.737792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.737886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.737903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.738114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.738129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.738362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.738378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.738629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.738644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.738786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.738801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.738956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.738971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.739125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.739140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 
00:27:28.763 [2024-11-27 08:10:22.739305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.739320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.739469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.739484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.763 qpair failed and we were unable to recover it. 00:27:28.763 [2024-11-27 08:10:22.739751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.763 [2024-11-27 08:10:22.739766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.739861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.739876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.740104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.740120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.740265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.740279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.740463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.740479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.740672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.740687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.740929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.740971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.741197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.741229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 
00:27:28.764 [2024-11-27 08:10:22.741499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.741531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.741740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.741773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.742031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.742065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.742312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.742345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.742564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.742598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.742792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.742825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.743097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.743131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.743324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.743357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.743485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.743518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.743791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.743824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 
00:27:28.764 [2024-11-27 08:10:22.744012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.744034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.744134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.744149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.744406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.744420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.744510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.744524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.744706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.744720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.744823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.744838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.744993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.745009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.745178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.745192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.745283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.745297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.745506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.745520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 
00:27:28.764 [2024-11-27 08:10:22.745749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.745764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.745861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.745875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.746021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.746036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.746121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.746135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.746233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.746247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.746486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.746517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.746662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.746694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.747005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.747039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.747182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.747214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.764 [2024-11-27 08:10:22.747354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.747386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 
00:27:28.764 [2024-11-27 08:10:22.747625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.764 [2024-11-27 08:10:22.747641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.764 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.747806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.747820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.747978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.747993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.748089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.748104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.748315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.748329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.748494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.748509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.748709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.748724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.748932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.748955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.749131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.749145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.749327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.749360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 
00:27:28.765 [2024-11-27 08:10:22.749628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.749658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.749789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.749822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.750024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.750040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.750140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.750155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.750256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.750270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.750422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.750437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.750646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.750661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.750840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.750854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.751017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.751051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.751230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.751262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 
00:27:28.765 [2024-11-27 08:10:22.751462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.751494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.751707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.751722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.751882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.751897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.752094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.752109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.752294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.752309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.752420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.752434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.752637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.752672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.752783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.752815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.752957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.752991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.753129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.753161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 
00:27:28.765 [2024-11-27 08:10:22.753291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.753323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.753507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.753540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.753829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.753869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.754088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.754102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.765 [2024-11-27 08:10:22.754245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.765 [2024-11-27 08:10:22.754260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.765 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.754355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.754371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.754596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.754611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.754760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.754775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.755005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.755038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.755256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.755288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 
00:27:28.766 [2024-11-27 08:10:22.755416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.755448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.755654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.755669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.755882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.755896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.756007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.756021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.756244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.756258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.756442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.756456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.756559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.756574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.756820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.756834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.756997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.757016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.757274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.757289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 
00:27:28.766 [2024-11-27 08:10:22.757381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.757394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.757584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.757599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.757840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.757855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.757977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.757992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.758180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.758195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.758336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.758350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.758541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.758574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.758789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.758820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.759119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.759154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.759345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.759377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 
00:27:28.766 [2024-11-27 08:10:22.759555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.759587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.759726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.759757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.759945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.759990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.760181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.760212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.760362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.760395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.760591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.760605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.760690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.760703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.760878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.760893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.761038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.761053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.761158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.761173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 
00:27:28.766 [2024-11-27 08:10:22.761358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.761372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.761635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.761668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.766 qpair failed and we were unable to recover it. 00:27:28.766 [2024-11-27 08:10:22.761856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.766 [2024-11-27 08:10:22.761886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.762117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.762151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.762304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.762335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.762690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.762728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.762927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.762942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.763099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.763114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.763333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.763365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.763616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.763647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 
00:27:28.767 [2024-11-27 08:10:22.763854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.763896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.764045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.764060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.764292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.764307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.764401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.764416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.764521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.764536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.764704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.764718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.764890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.764904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.765043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.765058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.765242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.765257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.765410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.765425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 
00:27:28.767 [2024-11-27 08:10:22.765585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.765599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.765777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.765809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.766017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.766052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.766241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.766274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.766498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.766530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.766802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.766834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.767080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.767096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.767284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.767298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.767544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.767558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.767722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.767737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 
00:27:28.767 [2024-11-27 08:10:22.767877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.767891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.768044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.768059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.768269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.768284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.768455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.768470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.768672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.768704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.768973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.769007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.767 [2024-11-27 08:10:22.769207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.767 [2024-11-27 08:10:22.769239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.767 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.769420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.769451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.769711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.769742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.769927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.769982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 
00:27:28.768 [2024-11-27 08:10:22.770136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.770169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.770311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.770343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.770533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.770564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.770829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.770861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.771127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.771143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.771302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.771316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.771474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.771516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.771745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.771776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.772031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.772065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.772204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.772219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 
00:27:28.768 [2024-11-27 08:10:22.772323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.772338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.772504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.772518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.772601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.772614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.772796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.772810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.772914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.772928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.773128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.773143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.773240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.773254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.773419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.773434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.773590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.773604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.773762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.773778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 
00:27:28.768 [2024-11-27 08:10:22.774034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.774068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.774277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.774308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.774590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.774622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.774830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.774862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.775118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.775151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.775302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.775334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.775580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.775612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.775782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.775796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.775974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.776007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.776152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.776185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 
00:27:28.768 [2024-11-27 08:10:22.776384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.776415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.776561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.768 [2024-11-27 08:10:22.776593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.768 qpair failed and we were unable to recover it. 00:27:28.768 [2024-11-27 08:10:22.776787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.776818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.777087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.777105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.777205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.777220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.777322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.777337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.777579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.777594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.777741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.777786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.777900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.777931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.778075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.778108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 
00:27:28.769 [2024-11-27 08:10:22.778389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.778422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.778657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.778688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.778820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.778852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.779072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.779087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.779192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.779207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.779354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.779369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.779519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.779534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.779784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.779820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.779999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.780039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 00:27:28.769 [2024-11-27 08:10:22.780293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.769 [2024-11-27 08:10:22.780327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.769 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / qpair failed sequence repeats for tqpair=0x7f39c4000b90 from 08:10:22.779999 through 08:10:22.805569 ...]
00:27:28.773 [2024-11-27 08:10:22.805927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.773 [2024-11-27 08:10:22.806014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420
00:27:28.773 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / qpair failed sequence repeats for tqpair=0xa19be0 from 08:10:22.806241 through 08:10:22.824043 ...]
00:27:28.775 [2024-11-27 08:10:22.824250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.775 [2024-11-27 08:10:22.824283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420
00:27:28.775 qpair failed and we were unable to recover it.
00:27:28.775 [2024-11-27 08:10:22.824481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.824512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.824713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.824744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.825014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.825049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.825327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.825342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.825510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.825525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.825768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.825783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.825960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.825976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.826084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.826099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.826306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.826321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.826510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.826524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 
00:27:28.775 [2024-11-27 08:10:22.826779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.826793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.827057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.827072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.827234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.827252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.827460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.827475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.827730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.827762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.827898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.827929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.828101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.828135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.775 qpair failed and we were unable to recover it. 00:27:28.775 [2024-11-27 08:10:22.828328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.775 [2024-11-27 08:10:22.828342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.828461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.828475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.828740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.828755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 
00:27:28.776 [2024-11-27 08:10:22.828941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.828963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.829058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.829071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.829300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.829316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.829479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.829494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.829664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.829679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.829934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.829953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.830122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.830163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.830409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.830439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.830711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.830743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.831024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.831040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 
00:27:28.776 [2024-11-27 08:10:22.831143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.831158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.831262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.831277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.831397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.831412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.831590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.831605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.831757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.831771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.832025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.832040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.832262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.832278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.832443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.832459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.832639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.832654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.832819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.832834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 
00:27:28.776 [2024-11-27 08:10:22.833078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.833093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.833304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.833318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.833499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.833514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.833703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.833734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.833957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.833991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.834213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.834245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.834447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.834479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.834675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.834707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.834883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.834897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.835144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.835159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 
00:27:28.776 [2024-11-27 08:10:22.835325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.835339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.835430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.835443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.835564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.835579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.835683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.835700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.835792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.835810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.835970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.776 [2024-11-27 08:10:22.835986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.776 qpair failed and we were unable to recover it. 00:27:28.776 [2024-11-27 08:10:22.836151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.836165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.836256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.836269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.836450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.836482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.836605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.836637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 
00:27:28.777 [2024-11-27 08:10:22.836856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.836887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.837092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.837108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.837316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.837330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.837491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.837506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.837671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.837685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.837883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.837914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.838204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.838238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.838394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.838426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.838650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.838681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.838833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.838865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 
00:27:28.777 [2024-11-27 08:10:22.839006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.839039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.839229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.839260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.839462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.839495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.839691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.839722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.839916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.839956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.840093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.840108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.840215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.840230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.840380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.840394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.840634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.840665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.840912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.840944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 
00:27:28.777 [2024-11-27 08:10:22.841183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.841201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.841365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.841379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.841528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.841543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.841738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.841753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.841963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.841978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.842071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.842084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.842187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.842202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.842377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.842391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.842639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.842653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.842851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.842883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 
00:27:28.777 [2024-11-27 08:10:22.843011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.843044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.843291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.843324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.843507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.843538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.843666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.843698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:28.777 qpair failed and we were unable to recover it. 00:27:28.777 [2024-11-27 08:10:22.844053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.777 [2024-11-27 08:10:22.844092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.844281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.844310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.844410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.844422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.844599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.844611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.844789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.844823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.845041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.845076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 
00:27:28.778 [2024-11-27 08:10:22.845288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.845321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.845513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.845546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.845820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.845853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.846046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.846080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.846329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.846362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.846568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.846600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.846853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.846886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.847084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.847128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.847306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.847340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.847607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.847640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 
00:27:28.778 [2024-11-27 08:10:22.847852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.847886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.848089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.848100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.848210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.848221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.848307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.848317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.848518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.848529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.848732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.848744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.848921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.848932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.849166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.849178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.849244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.849254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.849364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.849374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 
00:27:28.778 [2024-11-27 08:10:22.849441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.849452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.849598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.849610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.849859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.849871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.849967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.849978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.850108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.850119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.850323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.850334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.850508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.850519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.850681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.850692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.850842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.850853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.851071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.851083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 
00:27:28.778 [2024-11-27 08:10:22.851151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.851161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.851315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.778 [2024-11-27 08:10:22.851326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.778 qpair failed and we were unable to recover it. 00:27:28.778 [2024-11-27 08:10:22.851459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.851471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:28.779 [2024-11-27 08:10:22.851656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.851667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:28.779 [2024-11-27 08:10:22.851759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.851779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:28.779 [2024-11-27 08:10:22.851858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.851873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:28.779 [2024-11-27 08:10:22.852102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.852118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:28.779 [2024-11-27 08:10:22.852297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.852312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:28.779 [2024-11-27 08:10:22.852520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.852553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:28.779 [2024-11-27 08:10:22.852824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.852856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 
00:27:28.779 [2024-11-27 08:10:22.853056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.853072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:28.779 [2024-11-27 08:10:22.853192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.853207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:28.779 [2024-11-27 08:10:22.853391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.853406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:28.779 [2024-11-27 08:10:22.853592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.779 [2024-11-27 08:10:22.853607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:28.779 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.853784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.853798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.854013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.854029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.854226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.854240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.854485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.854504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.854674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.854689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.854855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.854870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 
00:27:29.058 [2024-11-27 08:10:22.855098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.855113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.855276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.855291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.855407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.855422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.855640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.855655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.855752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.855766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.855928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.855944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.856203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.856218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.856328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.856342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.856448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.856463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.856729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.856745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 
00:27:29.058 [2024-11-27 08:10:22.856955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.856970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.058 qpair failed and we were unable to recover it. 00:27:29.058 [2024-11-27 08:10:22.857197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.058 [2024-11-27 08:10:22.857212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.857388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.857402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.857514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.857528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.857762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.857777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.857978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.857994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.858135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.858150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.858263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.858278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.858513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.858528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.858629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.858644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 
00:27:29.059 [2024-11-27 08:10:22.858882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.858927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.859087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.859121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.859303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.859336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.859457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.859490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.859846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.859916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.860154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.860202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.860456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.860488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.860710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.860743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.860935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.860979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.861134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.861149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 
00:27:29.059 [2024-11-27 08:10:22.861367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.861398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.861675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.861706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.861960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.861994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.862129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.862160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.862344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.862377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.862513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.862543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.862766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.862799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.863094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.863128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.863405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.863437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.863696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.863728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 
00:27:29.059 [2024-11-27 08:10:22.863984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.864018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.864205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.864220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.864428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.864443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.864711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.864743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.865020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.865052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.865187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.865202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.865381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.865395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.865605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.865619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.865714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.865729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 00:27:29.059 [2024-11-27 08:10:22.865958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.059 [2024-11-27 08:10:22.865973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.059 qpair failed and we were unable to recover it. 
00:27:29.060 [2024-11-27 08:10:22.866059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.866073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.866309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.866327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.866489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.866503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.866696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.866711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.866864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.866878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.867045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.867061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.867273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.867287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.867441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.867456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.867699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.867713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.867815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.867829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 
00:27:29.060 [2024-11-27 08:10:22.867985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.868000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.868238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.868270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.868582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.868614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.868817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.868849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.869103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.869136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.869288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.869321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.869509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.869540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.869808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.869840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.869965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.869999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.870183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.870215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 
00:27:29.060 [2024-11-27 08:10:22.870438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.870469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.870780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.870825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.870968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.870983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.871141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.871156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.871309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.871323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.871505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.871519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.871729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.871744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.871906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.871921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.872109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.872142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.872295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.872328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 
00:27:29.060 [2024-11-27 08:10:22.872548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.872579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.872770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.872802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.873076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.873108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.873229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.873261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.873397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.873428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.873615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.873647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.873833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.873864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.060 [2024-11-27 08:10:22.874135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.060 [2024-11-27 08:10:22.874169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.060 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.874301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.874315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.874420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.874434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 
00:27:29.061 [2024-11-27 08:10:22.874694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.874708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.874932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.874958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.875210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.875238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.875399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.875411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.875558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.875569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.875787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.875819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.876077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.876112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.876291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.876303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.876409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.876420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.876585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.876597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 
00:27:29.061 [2024-11-27 08:10:22.876693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.876703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.876920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.876931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.877072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.877084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.877232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.877243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.877381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.877392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.877471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.877486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.877623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.877634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.877729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.877739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.877818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.877828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.877966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.877977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 
00:27:29.061 [2024-11-27 08:10:22.878178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.878189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.878328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.878339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.878628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.878639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.878841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.878852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.879093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.879105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.879334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.879345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.879557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.879568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.879743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.879754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.879905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.879916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.880065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.880076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 
00:27:29.061 [2024-11-27 08:10:22.880229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.880240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.880343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.880354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.880555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.880567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.880714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.880725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.880882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.880893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.061 [2024-11-27 08:10:22.881100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.061 [2024-11-27 08:10:22.881111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.061 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.881345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.881356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.881445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.881455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.881713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.881724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.881970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.881982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 
00:27:29.062 [2024-11-27 08:10:22.882137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.882148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.882248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.882260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.882438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.882449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.882602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.882613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.882868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.882879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.883074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.883114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.883367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.883400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.883595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.883628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.883905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.883937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.884194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.884206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 
00:27:29.062 [2024-11-27 08:10:22.884369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.884380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.884523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.884534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.884763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.884774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.885000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.885021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.885165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.885176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.885406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.885445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.885656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.885689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.885871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.885903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.886181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.886192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 00:27:29.062 [2024-11-27 08:10:22.886276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.062 [2024-11-27 08:10:22.886286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.062 qpair failed and we were unable to recover it. 
00:27:29.062 [2024-11-27 08:10:22.886495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.062 [2024-11-27 08:10:22.886506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:29.062 qpair failed and we were unable to recover it.
00:27:29.062 [2024-11-27 08:10:22.886755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.062 [2024-11-27 08:10:22.886766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:29.062 qpair failed and we were unable to recover it.
[... the same three-line pattern -- "connect() failed, errno = 111", "sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." -- repeats for every retry from 08:10:22.886495 through 08:10:22.934833, with only the timestamps changing ...]
00:27:29.068 [2024-11-27 08:10:22.934823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.068 [2024-11-27 08:10:22.934833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:29.068 qpair failed and we were unable to recover it.
00:27:29.068 [2024-11-27 08:10:22.935054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.935089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.935305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.935337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.935526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.935536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.935710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.935721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.935974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.936008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.936185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.936217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.936478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.936510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.936755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.936788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.937064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.937099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.937371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.937403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 
00:27:29.068 [2024-11-27 08:10:22.937623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.937657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.937875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.937909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.938158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.938171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.938378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.938389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.938542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.938553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.938765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.938798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.939115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.939149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.068 qpair failed and we were unable to recover it. 00:27:29.068 [2024-11-27 08:10:22.939423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.068 [2024-11-27 08:10:22.939456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.939683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.939715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.939967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.940001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 
00:27:29.069 [2024-11-27 08:10:22.940268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.940302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.940443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.940454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.940692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.940703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.940839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.940850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.941027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.941038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.941261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.941272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.941523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.941533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.941741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.941751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.941952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.941963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.942211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.942244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 
00:27:29.069 [2024-11-27 08:10:22.942423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.942455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.942635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.942667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.942972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.943005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.943302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.943334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.943575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.943586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.943785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.943796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.944031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.944042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.944272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.944306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.944603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.944636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.944899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.944931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 
00:27:29.069 [2024-11-27 08:10:22.945180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.945192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.945337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.945348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.945606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.945638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.945850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.945884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.946083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.946118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.946370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.946402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.946670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.946702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.946999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.947033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.947306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.947338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.947612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.947643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 
00:27:29.069 [2024-11-27 08:10:22.947907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.947939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.948199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.948210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.948369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.948382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.948550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.948560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.948657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.069 [2024-11-27 08:10:22.948666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.069 qpair failed and we were unable to recover it. 00:27:29.069 [2024-11-27 08:10:22.948868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.948879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.949079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.949091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.949247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.949257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.949430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.949465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.949737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.949770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 
00:27:29.070 [2024-11-27 08:10:22.949960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.949993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.950221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.950232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.950401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.950412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.950490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.950500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.950650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.950661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.950807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.950817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.951031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.951053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.951285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.951296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.951443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.951453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.951668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.951682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 
00:27:29.070 [2024-11-27 08:10:22.951853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.951864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.952015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.952027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.952234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.952245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.952397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.952407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.952578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.952591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.952845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.952858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.953028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.953076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.953339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.953373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.953605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.953646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.953836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.953869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 
00:27:29.070 [2024-11-27 08:10:22.954060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.954094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.954365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.954399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.954678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.954714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.954849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.954882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.955077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.955113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.955359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.955393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.955704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.955716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.955919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.955931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.956174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.956186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.956445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.956457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 
00:27:29.070 [2024-11-27 08:10:22.956657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.956669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.956822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.070 [2024-11-27 08:10:22.956834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.070 qpair failed and we were unable to recover it. 00:27:29.070 [2024-11-27 08:10:22.957087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.957134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.957349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.957384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.957533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.957566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.957840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.957873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.958121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.958161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.958364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.958377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.958539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.958549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.958773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.958785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 
00:27:29.071 [2024-11-27 08:10:22.959093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.959131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.959408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.959441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.959593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.959626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.959829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.959861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.960138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.960176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.960370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.960403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.960661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.960694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.960944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.960996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.961165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.961176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.961266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.961276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 
00:27:29.071 [2024-11-27 08:10:22.961367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.961378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.961606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.961617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.961761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.961773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.961998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.962010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.962105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.962116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.962268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.962280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.962375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.962386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.962533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.962544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.962633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.962644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.962865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.962902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 
00:27:29.071 [2024-11-27 08:10:22.963126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.963145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.963302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.963317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.963500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.963515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.963662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.963677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.963914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.963930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.964097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.964113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.964273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.071 [2024-11-27 08:10:22.964317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.071 qpair failed and we were unable to recover it. 00:27:29.071 [2024-11-27 08:10:22.964456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.072 [2024-11-27 08:10:22.964488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.072 qpair failed and we were unable to recover it. 00:27:29.072 [2024-11-27 08:10:22.964757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.072 [2024-11-27 08:10:22.964789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.072 qpair failed and we were unable to recover it. 00:27:29.072 [2024-11-27 08:10:22.965029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.072 [2024-11-27 08:10:22.965063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.072 qpair failed and we were unable to recover it. 
00:27:29.072 [2024-11-27 08:10:22.965263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.072 [2024-11-27 08:10:22.965295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:27:29.072 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it.) repeats continuously from 08:10:22.965 to 08:10:23.012 for tqpair handles 0x7f39c4000b90, 0x7f39c8000b90, 0x7f39d0000b90 and 0xa19be0, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:27:29.077 [2024-11-27 08:10:23.012961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.077 [2024-11-27 08:10:23.012977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420
00:27:29.077 qpair failed and we were unable to recover it.
00:27:29.077 [2024-11-27 08:10:23.013202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.077 [2024-11-27 08:10:23.013218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.077 qpair failed and we were unable to recover it. 00:27:29.077 [2024-11-27 08:10:23.013479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.077 [2024-11-27 08:10:23.013512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.077 qpair failed and we were unable to recover it. 00:27:29.077 [2024-11-27 08:10:23.013811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.077 [2024-11-27 08:10:23.013843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.014142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.014178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.014465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.014496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.014774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.014806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.015117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.015157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.015352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.015367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.015622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.015636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.015790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.015806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 
00:27:29.078 [2024-11-27 08:10:23.015999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.016015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.016249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.016281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.016475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.016508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.016722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.016755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.017027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.017061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.017291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.017324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.017538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.017554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.017790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.017806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.017899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.017913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.018158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.018174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 
00:27:29.078 [2024-11-27 08:10:23.018391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.018407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.018628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.018643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.018819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.018834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.018958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.018974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.019139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.019154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.019413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.019428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.019666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.019681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.019908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.019923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.020085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.020101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.020267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.020282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 
00:27:29.078 [2024-11-27 08:10:23.020529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.020545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.020705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.020719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.020824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.020840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.021077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.021093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.021373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.021388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.021578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.021593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.021758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.021773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.022035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.022069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.022329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.022344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.078 [2024-11-27 08:10:23.022496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.022510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 
00:27:29.078 [2024-11-27 08:10:23.022677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.078 [2024-11-27 08:10:23.022692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.078 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.022855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.022871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.022964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.022979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.023073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.023086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.023249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.023265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.023415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.023429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.023698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.023731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.024046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.024119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.024434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.024449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.024654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.024667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 
00:27:29.079 [2024-11-27 08:10:23.024869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.024882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.025117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.025129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.025324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.025337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.025514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.025549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.025826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.025862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.026174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.026212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.026468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.026512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.026803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.026838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.027130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.027167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.027334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.027345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 
00:27:29.079 [2024-11-27 08:10:23.027490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.027502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.027664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.027676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.027891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.027924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.028157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.028194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.028401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.028439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.028608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.028621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.028699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.028710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.028852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.028865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.029015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.029029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.029234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.029246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 
00:27:29.079 [2024-11-27 08:10:23.029448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.029460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.029605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.029616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.029690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.029700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.029846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.029858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.030091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.030102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.030254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.030264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.030501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.030511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.079 [2024-11-27 08:10:23.030711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.079 [2024-11-27 08:10:23.030723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.079 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.030791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.030801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.030959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.030972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 
00:27:29.080 [2024-11-27 08:10:23.031058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.031068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.031168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.031178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.031331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.031341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.031522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.031533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.031690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.031700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.031794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.031804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.031943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.031958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.032106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.032122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.032329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.032339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.032422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.032431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 
00:27:29.080 [2024-11-27 08:10:23.032507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.032517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.032637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.032646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.032732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.032742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.032941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.032960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.033058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.033069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.033152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.033162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.033318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.033329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.033415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.033426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.033531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.033542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.033791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.033805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 
00:27:29.080 [2024-11-27 08:10:23.034013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.034024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.034121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.034131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.034337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.034348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.034438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.034449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.034662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.034674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.034821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.034831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.034977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.034988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.035080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.035091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.035231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.035242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.035318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.035328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 
00:27:29.080 [2024-11-27 08:10:23.035464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.035476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.035620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.035631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.035768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.035779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.036009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.036021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.036200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.080 [2024-11-27 08:10:23.036209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.080 qpair failed and we were unable to recover it. 00:27:29.080 [2024-11-27 08:10:23.036341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.036354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.036504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.036516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.036720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.036731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.036970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.036982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.037181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.037192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 
00:27:29.081 [2024-11-27 08:10:23.037342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.037353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.037581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.037591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.037813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.037825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.037909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.037918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.038057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.038068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.038267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.038277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.038438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.038449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.038608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.038620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.038848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.038860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.038994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.039006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 
00:27:29.081 [2024-11-27 08:10:23.039161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.039173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.039368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.039404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.039653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.039688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.039903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.039938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.040234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.040278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.040571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.040605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.040770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.040783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.040941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.040961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.041211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.041251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.041526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.041558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 
00:27:29.081 [2024-11-27 08:10:23.041753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.041789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.041990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.042026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.042306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.042342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.042589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.042622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.042899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.042934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.043225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.043262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.043531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.043565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.043756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.043791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.081 [2024-11-27 08:10:23.044065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.081 [2024-11-27 08:10:23.044100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.081 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.044380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.044393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 
00:27:29.082 [2024-11-27 08:10:23.044570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.044581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.044716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.044728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.044978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.044991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.045175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.045188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.045380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.045390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.045570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.045602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.045814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.045847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.046051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.046090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.046375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.046409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.046609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.046641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 
00:27:29.082 [2024-11-27 08:10:23.046887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.046898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.047075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.047089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.047228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.047239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.047377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.047390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.047550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.047562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.047838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.047878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.048080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.048115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.048366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.048414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.048720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.048760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.048985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.049020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 
00:27:29.082 [2024-11-27 08:10:23.049304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.049340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.049567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.049611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.049760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.049770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.049997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.050009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.050078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.050088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.050234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.050245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.050514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.050530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.050782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.050794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.051007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.051020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.051227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.051239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 
00:27:29.082 [2024-11-27 08:10:23.051408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.051421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.051660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.051672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.051827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.051838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.052027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.052038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.052266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.082 [2024-11-27 08:10:23.052277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.082 qpair failed and we were unable to recover it. 00:27:29.082 [2024-11-27 08:10:23.052377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.052388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.052471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.052481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.052723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.052763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.053043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.053077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.053290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.053327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 
00:27:29.083 [2024-11-27 08:10:23.053461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.053472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.053562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.053573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.053725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.053736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.054016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.054052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.054274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.054312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.054568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.054603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.054851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.054862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.055019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.055032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.055192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.055202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.055400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.055412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 
00:27:29.083 [2024-11-27 08:10:23.055590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.055601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.055753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.055764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.055932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.055946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.056119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.056154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.056427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.056460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.056597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.056630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.056927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.056975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.057258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.057302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.057504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.057550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.057782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.057796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 
00:27:29.083 [2024-11-27 08:10:23.058002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.058014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.058286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.058299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.058412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.058424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.058618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.058630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.058915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.058963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.059123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.059165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.059371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.059407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.059682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.059718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.059920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.059966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.060177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.060214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 
00:27:29.083 [2024-11-27 08:10:23.060388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.060400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.060561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.060601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.083 qpair failed and we were unable to recover it. 00:27:29.083 [2024-11-27 08:10:23.060879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.083 [2024-11-27 08:10:23.060916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.061143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.061178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.061395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.061407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.061615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.061627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.061726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.061737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.061971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.061985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.062118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.062128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.062347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.062382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 
00:27:29.084 [2024-11-27 08:10:23.062591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.062623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.062889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.062925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.063251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.063288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.063485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.063519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.063732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.063768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.063956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.063993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.064295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.064332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.064523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.064563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.064780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.064792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.064963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.064976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 
00:27:29.084 [2024-11-27 08:10:23.065146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.065157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.065252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.065263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.065409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.065420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.065645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.065658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.065884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.065916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.066164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.066199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.066497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.066510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.066731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.066746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.066912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.066924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.067090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.067103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 
00:27:29.084 [2024-11-27 08:10:23.067252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.067264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.067367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.067379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.067604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.067617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.067717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.067727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.067882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.067895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.068030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.068043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.068246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.068257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.068510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.068522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.068746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.068757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 00:27:29.084 [2024-11-27 08:10:23.068968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.084 [2024-11-27 08:10:23.068980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.084 qpair failed and we were unable to recover it. 
00:27:29.085 [2024-11-27 08:10:23.069128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.069145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.069375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.069387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.069533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.069545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.069703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.069717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.069921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.069935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.070100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.070113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.070207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.070217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.070458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.070470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.070564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.070574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.070651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.070661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 
00:27:29.085 [2024-11-27 08:10:23.070879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.070891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.071107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.071120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.071309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.071322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.071504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.071515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.071748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.071780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.071920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.071969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.072240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.072275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.072475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.072513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.072728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.072764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.072987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.073000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 
00:27:29.085 [2024-11-27 08:10:23.073205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.073216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.073402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.073414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.073635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.073668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.073860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.073892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.074168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.074205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.074421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.074455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.074583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.074617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.074798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.074846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.074924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.074936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.075172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.075186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 
00:27:29.085 [2024-11-27 08:10:23.075287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.075298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.075506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.075517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.075668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.075679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.075911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.075924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.076186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.076199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.076363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.076375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.076642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.085 [2024-11-27 08:10:23.076654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.085 qpair failed and we were unable to recover it. 00:27:29.085 [2024-11-27 08:10:23.076819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.086 [2024-11-27 08:10:23.076829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.086 qpair failed and we were unable to recover it. 00:27:29.086 [2024-11-27 08:10:23.077032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.086 [2024-11-27 08:10:23.077045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.086 qpair failed and we were unable to recover it. 00:27:29.086 [2024-11-27 08:10:23.077202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.086 [2024-11-27 08:10:23.077213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.086 qpair failed and we were unable to recover it. 
00:27:29.086 [2024-11-27 08:10:23.077319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.086 [2024-11-27 08:10:23.077330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:29.086 qpair failed and we were unable to recover it.
[... the three lines above repeat for every connection attempt logged between 08:10:23.077 and 08:10:23.115: connect() fails with errno = 111 against addr=10.0.0.2, port=4420, first for tqpair=0x7f39c8000b90, then for tqpair=0x7f39d0000b90, then for tqpair=0x7f39c8000b90 again, and each time the qpair fails and cannot be recovered ...]
00:27:29.091 [2024-11-27 08:10:23.115329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.091 [2024-11-27 08:10:23.115339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.091 qpair failed and we were unable to recover it. 00:27:29.091 [2024-11-27 08:10:23.115474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.091 [2024-11-27 08:10:23.115485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.091 qpair failed and we were unable to recover it. 00:27:29.091 [2024-11-27 08:10:23.115633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.091 [2024-11-27 08:10:23.115650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.091 qpair failed and we were unable to recover it. 00:27:29.091 [2024-11-27 08:10:23.115806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.091 [2024-11-27 08:10:23.115818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.115980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.115997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.116229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.116245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.116411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.116427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.116661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.116677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.116854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.116868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.117105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.117120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 
00:27:29.092 [2024-11-27 08:10:23.117294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.117308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.117473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.117489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.117730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.117745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.117868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.117884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.117992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.118018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.118257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.118272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.118514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.118530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.118779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.118797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.119018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.119035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.119143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.119157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 
00:27:29.092 [2024-11-27 08:10:23.119311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.119326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.119615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.119630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.119715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.119729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.119958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.119973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.120079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.120094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.120342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.120357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.120467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.120482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.120646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.120661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.120894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.120909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.121141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.121157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 
00:27:29.092 [2024-11-27 08:10:23.121329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.121344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.121581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.121597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.121773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.121787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.121945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.121967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.122180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.122196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.122409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.122425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.122602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.122618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.122803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.122819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.123056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.123072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.123230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.123246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 
00:27:29.092 [2024-11-27 08:10:23.123357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.123372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.092 [2024-11-27 08:10:23.123551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.092 [2024-11-27 08:10:23.123567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.092 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.123740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.123754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.123909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.123925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.124174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.124188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.124357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.124368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.124545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.124557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.124716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.124728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.124922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.124933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.125082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.125093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 
00:27:29.093 [2024-11-27 08:10:23.125307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.125318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.125468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.125479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.125572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.125583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.125748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.125759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.125985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.125998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.126139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.126152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.126312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.126323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.126405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.126416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.126653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.126664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.126827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.126855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 
00:27:29.093 [2024-11-27 08:10:23.127090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.127103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.127284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.127295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.127446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.127457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.127608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.127619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.127849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.127861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.128085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.128097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.128250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.128262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.128427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.128438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.128658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.128670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.128875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.128887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 
00:27:29.093 [2024-11-27 08:10:23.129077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.129090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.129189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.129199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.129302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.129313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.129469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.129481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.129704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.129716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.129861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.129871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.130024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.130036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.130197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.130208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.130276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.130287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 00:27:29.093 [2024-11-27 08:10:23.130493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.093 [2024-11-27 08:10:23.130504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.093 qpair failed and we were unable to recover it. 
00:27:29.094 [2024-11-27 08:10:23.130671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.130682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.130779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.130789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.130994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.131007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.131109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.131120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.131371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.131384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.131596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.131607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.131853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.131866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.132020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.132032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.132255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.132266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.132503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.132514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 
00:27:29.094 [2024-11-27 08:10:23.132677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.132688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.132914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.132926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.133062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.133074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.133322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.133336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.133559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.133571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.133773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.133784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.133984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.133996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.134176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.134188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.134401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.134412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.134556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.134567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 
00:27:29.094 [2024-11-27 08:10:23.134709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.134720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.134959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.134974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.135173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.135185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.135411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.135424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.135587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.135598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.135824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.135836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.135990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.136003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.136098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.136108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.136278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.136289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.136464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.136474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 
00:27:29.094 [2024-11-27 08:10:23.136692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.136703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.136873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.136884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.137033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.137045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.137186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.137197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.137404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.137415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.137546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.137557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.137708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.137719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.094 [2024-11-27 08:10:23.137954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.094 [2024-11-27 08:10:23.137966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.094 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.138108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.138119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.138341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.138353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 
00:27:29.095 [2024-11-27 08:10:23.138522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.138533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.138668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.138678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.138879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.138890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.139121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.139136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.139353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.139367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.139541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.139551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.139634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.139645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.139846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.139859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.139969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.139985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 00:27:29.095 [2024-11-27 08:10:23.140143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.095 [2024-11-27 08:10:23.140154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.095 qpair failed and we were unable to recover it. 
00:27:29.095 [2024-11-27 08:10:23.140302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.095 [2024-11-27 08:10:23.140313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:29.095 qpair failed and we were unable to recover it.
[... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats continuously from 08:10:23.140477 through 08:10:23.180129 ...]
00:27:29.386 [2024-11-27 08:10:23.180243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.386 [2024-11-27 08:10:23.180254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:29.386 qpair failed and we were unable to recover it.
00:27:29.386 [2024-11-27 08:10:23.180400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.180412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.180610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.180621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.180707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.180718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.180921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.180933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.181107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.181119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.181272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.181283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.181429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.181440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.181615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.181626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.181720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.181730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.181876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.181887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 
00:27:29.386 [2024-11-27 08:10:23.182059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.182071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.182213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.182225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.182474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.182486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.182635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.182646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.182794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.182805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.183028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.183040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.183207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.183220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.183424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.183435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.183658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.183669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.183826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.183837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 
00:27:29.386 [2024-11-27 08:10:23.184100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.184113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.184268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.184280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.184421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.184432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.184528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.184538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.184671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.184685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.184936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.184953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.185118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.185130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.185333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.185345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.185563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.185575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.185792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.185804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 
00:27:29.386 [2024-11-27 08:10:23.185901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.185911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.186125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.386 [2024-11-27 08:10:23.186136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.386 qpair failed and we were unable to recover it. 00:27:29.386 [2024-11-27 08:10:23.186313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.186325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.186407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.186418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.186618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.186630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.186802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.186814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.187030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.187042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.187248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.187259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.187414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.187425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.187658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.187670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 
00:27:29.387 [2024-11-27 08:10:23.187789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.187800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.188031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.188043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.188244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.188255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.188391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.188403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.188624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.188635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.188861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.188872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.189082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.189093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.189258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.189271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.189491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.189502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.189712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.189724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 
00:27:29.387 [2024-11-27 08:10:23.189881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.189893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.190066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.190102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.190349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.190367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.190531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.190547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.190800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.190815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.190972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.190989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.191217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.191232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.191399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.191414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.191672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.191688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.191897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.191913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 
00:27:29.387 [2024-11-27 08:10:23.192122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.192139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.192299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.192316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.192523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.192539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.192748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.192763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.192926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.192943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.193101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.193117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.193377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.193392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.193626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.193642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.193804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.193820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.387 qpair failed and we were unable to recover it. 00:27:29.387 [2024-11-27 08:10:23.193971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.387 [2024-11-27 08:10:23.193986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 
00:27:29.388 [2024-11-27 08:10:23.194219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.194234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.194466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.194483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.194636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.194652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.194873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.194889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.195041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.195057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.195290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.195305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.195538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.195553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.195722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.195737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.195969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.195984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.196084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.196098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 
00:27:29.388 [2024-11-27 08:10:23.196242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.196257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.196411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.196436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.196647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.196663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.196845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.196861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.197027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.197043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.197207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.197222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.197446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.197461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.197690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.197706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.197917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.197932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.198095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.198111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 
00:27:29.388 [2024-11-27 08:10:23.198321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.198335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.198573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.198588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.198854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.198866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.199018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.199031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.199260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.199271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.199508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.199522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.199680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.199692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.199863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.199873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.200030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.200042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.200231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.200244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 
00:27:29.388 [2024-11-27 08:10:23.200484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.200496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.200723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.200735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.200967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.200979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.201247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.201258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.201410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.201424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.201585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.201596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.201819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.388 [2024-11-27 08:10:23.201831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.388 qpair failed and we were unable to recover it. 00:27:29.388 [2024-11-27 08:10:23.201980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.201992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.202221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.202232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.202397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.202409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 
00:27:29.389 [2024-11-27 08:10:23.202562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.202574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.202786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.202798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.203024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.203037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.203247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.203258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.203492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.203504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.203774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.203785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.203992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.204005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.204158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.204170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.204403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.204414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.204567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.204578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 
00:27:29.389 [2024-11-27 08:10:23.204724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.204735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.204884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.204895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.205059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.205072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.205208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.205219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.205323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.205335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.205469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.205482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.205635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.205647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.205819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.205830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.205978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.205989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.206253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.206266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 
00:27:29.389 [2024-11-27 08:10:23.206430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.206442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.206540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.206556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.206738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.206756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.206921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.206937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.207151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.207166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.207380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.207395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.207632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.207648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.207909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.207924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.208144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.208159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.208312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.208328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 
00:27:29.389 [2024-11-27 08:10:23.208486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.208501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.208713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.208727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.208883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.208898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.209141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.209157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.209388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.389 [2024-11-27 08:10:23.209408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.389 qpair failed and we were unable to recover it. 00:27:29.389 [2024-11-27 08:10:23.209620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.209635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.209900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.209915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.210070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.210085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.210249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.210265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.210516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.210531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 
00:27:29.390 [2024-11-27 08:10:23.210698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.210713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.210864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.210878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.211070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.211086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.211289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.211304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.211406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.211424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.211631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.211646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.211881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.211896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.212050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.212065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.212295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.212312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.212562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.212576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 
00:27:29.390 [2024-11-27 08:10:23.212819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.212835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.212987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.213002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.213093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.213107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.213287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.213303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.213548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.213562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.213724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.213740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.213918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.213932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.214105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.214120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.214268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.214280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.214482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.214494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 
00:27:29.390 [2024-11-27 08:10:23.214674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.214685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.214833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.214850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.215107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.215123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.215282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.215297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.215450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.215464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.215548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.215562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.215778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.390 [2024-11-27 08:10:23.215792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-11-27 08:10:23.215960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.215976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.216217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.216232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.216416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.216430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 
00:27:29.391 [2024-11-27 08:10:23.216571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.216586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.216759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.216774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.216883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.216897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.217074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.217089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.217256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.217274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.217529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.217544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.217787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.217803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.218011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.218026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.218187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.218201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.218314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.218329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 
00:27:29.391 [2024-11-27 08:10:23.218562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.218576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.218788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.218802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.219053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.219070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.219244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.219258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.219360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.219375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.219472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.219487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.219630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.219645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.219787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.219803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.220010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.220026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.220165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.220180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 
00:27:29.391 [2024-11-27 08:10:23.220409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.220424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.220632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.220648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.220878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.220893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.220988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.221003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.221162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.221177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.221444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.221459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.221687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.221702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.221868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.221883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.222039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.222063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.222233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.222248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 
00:27:29.391 [2024-11-27 08:10:23.222397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.222413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.222675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.222690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.222885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.222896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.222978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.222989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.223171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.391 [2024-11-27 08:10:23.223183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-11-27 08:10:23.223387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.223399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.223620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.223631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.223780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.223791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.224017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.224030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.224233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.224243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 
00:27:29.392 [2024-11-27 08:10:23.224409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.224421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.224667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.224678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.224829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.224841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.225043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.225055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.225287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.225298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.225442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.225453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.225619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.225631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.225784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.225795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.225946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.225970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.226171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.226182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 
00:27:29.392 [2024-11-27 08:10:23.226364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.226376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.226515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.226527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.226670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.226681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.226916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.226927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.227075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.227087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.227237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.227249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.227476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.227487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.227660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.227671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.227821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.227832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.228050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.228064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 
00:27:29.392 [2024-11-27 08:10:23.228162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.228172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.228416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.228427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.228574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.228586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.228790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.228801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.228893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.228905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.229130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.229141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.229342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.229353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.229535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.229546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.229692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.229703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.229901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.229914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 
00:27:29.392 [2024-11-27 08:10:23.230168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.230180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.230267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.230279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.230527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.392 [2024-11-27 08:10:23.230537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-11-27 08:10:23.230683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.230694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.230872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.230884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.230956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.230966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.231111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.231122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.231293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.231305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.231386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.231396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.231460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.231470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 
00:27:29.393 [2024-11-27 08:10:23.231623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.231634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.231741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.231752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.231923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.231934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.232113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.232126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.232270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.232281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.232434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.232445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.232644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.232657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.232859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.232870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.233026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.233039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.233224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.233234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 
00:27:29.393 [2024-11-27 08:10:23.233435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.233447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.233634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.233645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.233858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.233870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.234022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.234035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.234236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.234248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.234487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.234499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.234652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.234663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.234829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.234840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.235042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.235056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.235148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.235158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 
00:27:29.393 [2024-11-27 08:10:23.235257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.235268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.235411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.235422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.235565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.235576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.235814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.235826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.236044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.236055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.236208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.236219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.236360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.236371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.236460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.236471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.236555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.236566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.236741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.236752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 
00:27:29.393 [2024-11-27 08:10:23.236927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.393 [2024-11-27 08:10:23.236937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.393 qpair failed and we were unable to recover it. 00:27:29.393 [2024-11-27 08:10:23.237170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.394 [2024-11-27 08:10:23.237192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.394 qpair failed and we were unable to recover it. 00:27:29.394 [2024-11-27 08:10:23.237413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.394 [2024-11-27 08:10:23.237428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.394 qpair failed and we were unable to recover it. 00:27:29.394 [2024-11-27 08:10:23.237664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.394 [2024-11-27 08:10:23.237679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.394 qpair failed and we were unable to recover it. 00:27:29.394 [2024-11-27 08:10:23.237848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.394 [2024-11-27 08:10:23.237863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.394 qpair failed and we were unable to recover it. 00:27:29.394 [2024-11-27 08:10:23.238083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.394 [2024-11-27 08:10:23.238099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.394 qpair failed and we were unable to recover it. 00:27:29.394 [2024-11-27 08:10:23.238262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.394 [2024-11-27 08:10:23.238277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.394 qpair failed and we were unable to recover it. 00:27:29.394 [2024-11-27 08:10:23.238428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.394 [2024-11-27 08:10:23.238443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.394 qpair failed and we were unable to recover it. 00:27:29.394 [2024-11-27 08:10:23.238693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.394 [2024-11-27 08:10:23.238708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.394 qpair failed and we were unable to recover it. 00:27:29.394 [2024-11-27 08:10:23.238854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.394 [2024-11-27 08:10:23.238869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.394 qpair failed and we were unable to recover it. 
00:27:29.394 [2024-11-27 08:10:23.239028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.394 [2024-11-27 08:10:23.239043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:27:29.394 qpair failed and we were unable to recover it.
00:27:29.394 [The same three-message sequence, connect() failed with errno = 111, sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.", repeats continuously with fresh timestamps from 08:10:23.239 through 08:10:23.280; every connection attempt against 10.0.0.2:4420 fails the same way and the qpair is never recovered.]
00:27:29.400 [2024-11-27 08:10:23.280969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.280984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.281150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.281165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.281316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.281332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.281605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.281621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.281785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.281801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.281893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.281907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.282106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.282122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.282230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.282245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.282452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.282488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.282726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.282742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 
00:27:29.400 [2024-11-27 08:10:23.282969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.282981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.283116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.283128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.283374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.283386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.283538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.283549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.283756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.283768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.283871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.283882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.284017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.284029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.284279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.284290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.284457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.284469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.284615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.284625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 
00:27:29.400 [2024-11-27 08:10:23.284775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.284785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.284930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.284944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.285033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.285043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.285199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.285213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.285380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.285393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.285470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.285480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.285643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.285654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.285752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.285762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.285927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.285939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.286103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.286115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 
00:27:29.400 [2024-11-27 08:10:23.286316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.286327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.286558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.286569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.286658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.286668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.286870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.286884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.400 [2024-11-27 08:10:23.287030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.400 [2024-11-27 08:10:23.287043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.400 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.287265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.287277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.287362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.287371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.287518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.287530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.287685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.287696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.287888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.287901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 
00:27:29.401 [2024-11-27 08:10:23.288057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.288069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.288273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.288285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.288427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.288439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.288672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.288683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.288847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.288858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.288993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.289005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.289204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.289215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.289375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.289387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.289553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.289571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.289716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.289731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 
00:27:29.401 [2024-11-27 08:10:23.289809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.289823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.289984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.289999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.290277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.290293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.290518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.290533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.290749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.290765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.290924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.290940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.291058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.291074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.291269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.291285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.291494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.291509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.291611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.291627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 
00:27:29.401 [2024-11-27 08:10:23.291784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.291800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.292009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.292028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.292267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.292282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.292442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.292457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.292572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.292587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.292677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.292691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.292835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.292851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.293033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.293048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.293310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.293325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.293505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.293521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 
00:27:29.401 [2024-11-27 08:10:23.293683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.293698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.293868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.293883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.401 [2024-11-27 08:10:23.294073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.401 [2024-11-27 08:10:23.294090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.401 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.294346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.294362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.294516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.294531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.294771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.294787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.295022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.295037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.295190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.295206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.295381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.295396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.295566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.295582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 
00:27:29.402 [2024-11-27 08:10:23.295794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.295810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.295903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.295917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.296074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.296089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.296327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.296342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.296500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.296515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.296749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.296764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.296977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.296992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.297148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.297164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.297314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.297329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.297475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.297486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 
00:27:29.402 [2024-11-27 08:10:23.297637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.297648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.297801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.297813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.298041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.298053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.298248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.298259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.298493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.298504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.298668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.298679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.298902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.298915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.299142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.299155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.299309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.299322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.299469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.299480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 
00:27:29.402 [2024-11-27 08:10:23.299626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.299637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.299801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.299815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.299987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.299999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.300222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.300234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.300435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.300447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.300661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.300673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.300904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.300917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.301119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.301132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.301227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.301237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.301376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.301388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 
00:27:29.402 [2024-11-27 08:10:23.301638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.402 [2024-11-27 08:10:23.301650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.402 qpair failed and we were unable to recover it. 00:27:29.402 [2024-11-27 08:10:23.301770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.301784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.302018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.302030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.302122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.302131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.302264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.302275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.302429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.302441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.302616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.302627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.302864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.302875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.303077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.303089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.303247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.303259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 
00:27:29.403 [2024-11-27 08:10:23.303401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.303412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.303548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.303559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.303708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.303722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.303956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.303968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.304222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.304234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.304381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.304393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.304531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.304544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.304779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.304790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.305001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.305018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.305107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.305121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 
00:27:29.403 [2024-11-27 08:10:23.305348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.305364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.305457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.305471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.305711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.305726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.305977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.305993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.306167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.306182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.306371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.306386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.306550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.306565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.306722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.306737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.306970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.306986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.307127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.307142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 
00:27:29.403 [2024-11-27 08:10:23.307349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.307364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.307548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.307565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.307659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.307672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.307908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.307924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.308167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.308184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.308376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.308392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.308572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.308587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.308680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.308695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.308846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.403 [2024-11-27 08:10:23.308862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.403 qpair failed and we were unable to recover it. 00:27:29.403 [2024-11-27 08:10:23.309031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.309047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 
00:27:29.404 [2024-11-27 08:10:23.309127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.309142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.309309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.309324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.309563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.309578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.309737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.309753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.309935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.309962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.310218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.310234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.310446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.310461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.310670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.310686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.310920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.310935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.311181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.311197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 
00:27:29.404 [2024-11-27 08:10:23.311431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.311447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.311629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.311645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.311850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.311865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.311959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.311975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.312224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.312239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.312393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.312409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.312562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.312577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.312742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.312757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.312968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.312985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.313246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.313257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 
00:27:29.404 [2024-11-27 08:10:23.313456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.313467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.313618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.313630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.313872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.313884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.313979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.313991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.314164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.314175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.314386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.314397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.314610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.314621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.314847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.314859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.315040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.315054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.315203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.315214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 
00:27:29.404 [2024-11-27 08:10:23.315361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.315372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.315463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.315479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.315708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.404 [2024-11-27 08:10:23.315719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.404 qpair failed and we were unable to recover it. 00:27:29.404 [2024-11-27 08:10:23.315807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.315817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.315995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.316007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.316248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.316264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.316444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.316454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.316625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.316635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.316785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.316795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.317046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.317058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 
00:27:29.405 [2024-11-27 08:10:23.317279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.317291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.317439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.317450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.317593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.317604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.317844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.317856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.317997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.318011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.318158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.318169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.318372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.318385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.318605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.318617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.318930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.318943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.319101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.319112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 
00:27:29.405 [2024-11-27 08:10:23.319345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.319356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.319560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.319572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.319798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.319810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.319959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.319970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.320191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.320202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.320377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.320390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.320475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.320486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.320684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.320695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.320835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.320850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.320920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.320930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 
00:27:29.405 [2024-11-27 08:10:23.321023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.321035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.321263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.321275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.321414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.321426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.321616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.321627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.321841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.321852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.322055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.322068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.322148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.322158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.322370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.322381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.322596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.322608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 00:27:29.405 [2024-11-27 08:10:23.322829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.405 [2024-11-27 08:10:23.322842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.405 qpair failed and we were unable to recover it. 
00:27:29.405 [2024-11-27 08:10:23.322998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.323011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.323213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.323227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.323373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.323384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.323602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.323615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.323778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.323790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.323924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.323935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.324023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.324033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.324169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.324180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.324323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.324334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.324427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.324439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 
00:27:29.406 [2024-11-27 08:10:23.324591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.324601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.324803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.324814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.324983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.324995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.325151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.325162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.325256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.325267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.325510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.325522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.325688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.325700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.325897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.325909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.326064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.326079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.326223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.326234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 
00:27:29.406 [2024-11-27 08:10:23.326442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.326452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.326600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.326611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.326837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.326848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.327072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.327085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.327237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.327249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.327468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.327479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.327703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.327716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.327866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.327878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.328045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.328060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.328308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.328318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 
00:27:29.406 [2024-11-27 08:10:23.328519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.328531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.328694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.328705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.328861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.328873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.329122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.329135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.329288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.329300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.329389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.329399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.329567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.329578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.329724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.406 [2024-11-27 08:10:23.329735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.406 qpair failed and we were unable to recover it. 00:27:29.406 [2024-11-27 08:10:23.329884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.329895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.330047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.330058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 
00:27:29.407 [2024-11-27 08:10:23.330140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.330150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.330285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.330296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.330532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.330543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.330697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.330710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.330881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.330892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.331101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.331114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.331243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.331254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.331457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.331470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.331687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.331699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.331873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.331885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 
00:27:29.407 [2024-11-27 08:10:23.332108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.332122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.332267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.332278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.332448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.332459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.332658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.332670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.332755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.332765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.332979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.332990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.333212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.333223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.333448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.333460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.333609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.333620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.333764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.333775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 
00:27:29.407 [2024-11-27 08:10:23.333972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.333983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.334170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.334182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.334356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.334368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.334594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.334605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.334803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.334815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.334959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.334972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.335063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.335073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.335222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.335234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.335370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.335383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.335546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.335558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 
00:27:29.407 [2024-11-27 08:10:23.335715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.335726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.335824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.335834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.336007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.336018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.336165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.336178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.336315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.336327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.336510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.336521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.407 qpair failed and we were unable to recover it. 00:27:29.407 [2024-11-27 08:10:23.336664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.407 [2024-11-27 08:10:23.336675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.336832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.336845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.337053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.337066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.337295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.337307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 
00:27:29.408 [2024-11-27 08:10:23.337446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.337457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.337613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.337626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.337771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.337781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.337996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.338008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.338177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.338188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.338416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.338428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.338572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.338583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.338770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.338780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.338944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.338968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.339221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.339232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 
00:27:29.408 [2024-11-27 08:10:23.339450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.339460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.339721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.339733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.339878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.339890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.340124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.340136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.340275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.340286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.340435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.340446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.340645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.340657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.340727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.340738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.340937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.340952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.341104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.341117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 
00:27:29.408 [2024-11-27 08:10:23.341281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.341292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.341430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.341442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.341589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.341600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.341749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.341761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.341912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.341924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.342159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.342172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.342304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.342315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.342454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.342465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.342619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.342633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 00:27:29.408 [2024-11-27 08:10:23.342835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.408 [2024-11-27 08:10:23.342845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.408 qpair failed and we were unable to recover it. 
00:27:29.409 [2024-11-27 08:10:23.342925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.342935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.343097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.343109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.343201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.343213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.343412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.343424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.343646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.343657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.343809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.343820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.343912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.343923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.344057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.344070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.344277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.344288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.344469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.344480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 
00:27:29.409 [2024-11-27 08:10:23.344619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.344630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.344793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.344805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.344888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.344899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.345147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.345159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.345308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.345319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.345543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.345556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.345755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.345767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.345990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.346003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.346203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.346214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.346438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.346450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 
00:27:29.409 [2024-11-27 08:10:23.346747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.346759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.346907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.346918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.347071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.347083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.347232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.347243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.347474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.347485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.347726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.347738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.347905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.347916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.348061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.348073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.348298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.348309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.348457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.348468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 
00:27:29.409 [2024-11-27 08:10:23.348694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.348707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.348794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.348804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.348956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.348967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.349111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.349122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.349333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.349344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.349448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.349460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.349551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.409 [2024-11-27 08:10:23.349560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.409 qpair failed and we were unable to recover it. 00:27:29.409 [2024-11-27 08:10:23.349706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.349717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.349859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.349874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.350025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.350038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 
00:27:29.410 [2024-11-27 08:10:23.350251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.350265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.350487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.350498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.350717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.350728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.351004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.351016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.351224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.351236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.351371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.351382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.351627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.351639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.351888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.351900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.352067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.352080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.352178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.352190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 
00:27:29.410 [2024-11-27 08:10:23.352419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.352431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.352652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.352664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.352814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.352825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.352968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.352979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.353219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.353231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.353446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.353458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.353679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.353690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.353848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.353859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.354078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.354090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.354308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.354318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 
00:27:29.410 [2024-11-27 08:10:23.354461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.354473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.354629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.354642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.354870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.354881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.355033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.355045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.355257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.355268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.355414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.355425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.355577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.355588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.355809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.355820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.356150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.356163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.356428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.356439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 
00:27:29.410 [2024-11-27 08:10:23.356571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.356582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.356800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.356812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.356903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.356913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.357010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.357023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.357243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.357254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.410 qpair failed and we were unable to recover it. 00:27:29.410 [2024-11-27 08:10:23.357476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.410 [2024-11-27 08:10:23.357489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.357648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.357659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.357889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.357900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.358106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.358121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.358294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.358305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 
00:27:29.411 [2024-11-27 08:10:23.358435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.358446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.358628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.358639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.358789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.358802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.359005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.359017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.359248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.359259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.359396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.359408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.359667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.359679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.359827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.359838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.360064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.360076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.360211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.360223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 
00:27:29.411 [2024-11-27 08:10:23.360307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.360318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.360394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.360403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.360557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.360568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.360732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.360744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.360971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.360984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.361082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.361093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.361310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.361321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.361563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.361575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.361747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.361757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.361911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.361922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 
00:27:29.411 [2024-11-27 08:10:23.362146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.362160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.362225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.362236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.362480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.362491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.362704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.362715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.362888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.362899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.363096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.363108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.363335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.363346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.363550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.363563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.363794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.363804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.364009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.364022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 
00:27:29.411 [2024-11-27 08:10:23.364228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.364241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.364402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.364414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.364644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.364656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.411 qpair failed and we were unable to recover it. 00:27:29.411 [2024-11-27 08:10:23.364801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.411 [2024-11-27 08:10:23.364812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.365052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.365065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.365150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.365160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.365430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.365442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.365585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.365597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.365736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.365753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.365977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.365989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 
00:27:29.412 [2024-11-27 08:10:23.366131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.366142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.366276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.366287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.366496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.366507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.366775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.366788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.366988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.367000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.367143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.367155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.367304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.367315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.367485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.367497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.367696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.367709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.367799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.367809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 
00:27:29.412 [2024-11-27 08:10:23.367974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.367987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.368132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.368144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.368293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.368305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.368509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.368524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.368670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.368681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.368825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.368837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.368971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.368982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.369138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.369149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.369300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.369311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 00:27:29.412 [2024-11-27 08:10:23.369413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.412 [2024-11-27 08:10:23.369425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.412 qpair failed and we were unable to recover it. 
00:27:29.412 [2024-11-27 08:10:23.369624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.412 [2024-11-27 08:10:23.369636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:29.412 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 08:10:23.369851 through 08:10:23.409197 ...]
00:27:29.418 [2024-11-27 08:10:23.409433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.418 [2024-11-27 08:10:23.409446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:29.418 qpair failed and we were unable to recover it.
00:27:29.418 [2024-11-27 08:10:23.409563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.409576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.409667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.409679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.409916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.409932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.410165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.410178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.410334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.410348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.410504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.410517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.410652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.410666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.410747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.410759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.411015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.411033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.411180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.411194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 
00:27:29.418 [2024-11-27 08:10:23.411275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.411288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.411424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.411436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.411640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.411653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.411825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.411839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.412040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.412054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.412271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.412284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.412519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.412536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.412700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.412713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.412803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.418 [2024-11-27 08:10:23.412815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.418 qpair failed and we were unable to recover it. 00:27:29.418 [2024-11-27 08:10:23.413051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.413065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 
00:27:29.419 [2024-11-27 08:10:23.413296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.413309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.413395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.413408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.413558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.413569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.413669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.413680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.413904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.413918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.414013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.414025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.414263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.414276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.414413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.414426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.414512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.414522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.414771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.414784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 
00:27:29.419 [2024-11-27 08:10:23.414984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.414998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.415253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.415267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.415510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.415522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.415766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.415780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.415866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.415878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.416084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.416103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.416320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.416335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.416551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.416564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.416716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.416728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.416892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.416905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 
00:27:29.419 [2024-11-27 08:10:23.417107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.417121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.417271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.417284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.417422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.417434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.417594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.417606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.417751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.417765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.417981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.417995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.418205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.418217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.418377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.418390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.418531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.418546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.418716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.418729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 
00:27:29.419 [2024-11-27 08:10:23.418863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.418876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.418973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.418985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.419136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.419149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.419325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.419338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.419517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.419530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.419613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.419625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.419828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.419 [2024-11-27 08:10:23.419841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.419 qpair failed and we were unable to recover it. 00:27:29.419 [2024-11-27 08:10:23.419909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.419921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.420145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.420159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.420367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.420380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 
00:27:29.420 [2024-11-27 08:10:23.420532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.420545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.420773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.420786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.420894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.420910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.421136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.421150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.421402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.421416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.421622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.421635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.421792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.421804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.421909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.421922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.422075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.422090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.422226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.422240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 
00:27:29.420 [2024-11-27 08:10:23.422409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.422422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.422593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.422606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.422729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.422741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.422908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.422921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.423102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.423117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.423282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.423298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.423380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.423391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.423544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.423558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.423768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.423781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.423928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.423943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 
00:27:29.420 [2024-11-27 08:10:23.424175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.424187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.424261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.424272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.424491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.424504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.424798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.424812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.425068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.425082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.425227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.425239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.425372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.425385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.425533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.425545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.425697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.425711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.425882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.425896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 
00:27:29.420 [2024-11-27 08:10:23.426052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.426065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.426224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.426239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.426326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.426337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.426564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.426576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.426710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.426723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.420 qpair failed and we were unable to recover it. 00:27:29.420 [2024-11-27 08:10:23.426896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.420 [2024-11-27 08:10:23.426909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.427111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.427126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.427399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.427412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.427549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.427563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.427702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.427715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 
00:27:29.421 [2024-11-27 08:10:23.427939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.427958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.428203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.428216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.428455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.428468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.428707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.428721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.428969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.428984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.429146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.429159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.429313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.429327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.429475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.429488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.429690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.429702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.429852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.429864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 
00:27:29.421 [2024-11-27 08:10:23.430010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.430032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.430237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.430250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.430406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.430420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.430553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.430567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.430715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.430727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.430953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.430969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.431120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.431133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.431210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.431222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.431317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.431328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.431529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.431542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 
00:27:29.421 [2024-11-27 08:10:23.431768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.431782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.431959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.431972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.432203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.432216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.432283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.432295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.432494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.432506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.432644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.432657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.432810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.432823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.432910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.432921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.432998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.433011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 00:27:29.421 [2024-11-27 08:10:23.433099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.421 [2024-11-27 08:10:23.433111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.421 qpair failed and we were unable to recover it. 
00:27:29.421 [2024-11-27 08:10:23.433321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.421 [2024-11-27 08:10:23.433334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:29.421 qpair failed and we were unable to recover it.
00:27:29.421 [... the same three-line sequence — posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats continuously for SPDK timestamps 08:10:23.433498 through 08:10:23.472003 (console timestamps 00:27:29.421–00:27:29.705) ...]
00:27:29.705 [2024-11-27 08:10:23.472169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.472182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 00:27:29.705 [2024-11-27 08:10:23.472397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.472411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 00:27:29.705 [2024-11-27 08:10:23.472629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.472653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 00:27:29.705 [2024-11-27 08:10:23.472916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.472933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 00:27:29.705 [2024-11-27 08:10:23.473084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.473100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 00:27:29.705 [2024-11-27 08:10:23.473307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.473324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 00:27:29.705 [2024-11-27 08:10:23.473500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.473516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 00:27:29.705 [2024-11-27 08:10:23.473698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.473715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 00:27:29.705 [2024-11-27 08:10:23.473872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.473888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 00:27:29.705 [2024-11-27 08:10:23.474094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.474112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 
00:27:29.705 [2024-11-27 08:10:23.474266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.474282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.705 qpair failed and we were unable to recover it. 00:27:29.705 [2024-11-27 08:10:23.474449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.705 [2024-11-27 08:10:23.474466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.474617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.474634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.474792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.474809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.474969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.474987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.475140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.475165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.475321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.475337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.475587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.475604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.475742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.475758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.475984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.476001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 
00:27:29.706 [2024-11-27 08:10:23.476159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.476176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.476391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.476408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.476562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.476578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.476734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.476751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.476847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.476862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.477014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.477032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.477185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.477202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.477367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.477383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.477595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.477613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.477767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.477785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 
00:27:29.706 [2024-11-27 08:10:23.477957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.477974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.478185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.478201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.478410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.478426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.478583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.478599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.478885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.478902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.479066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.479083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.479299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.479316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.479406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.479423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.479578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.479596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.479736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.479752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 
00:27:29.706 [2024-11-27 08:10:23.479966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.479982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.480143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.480159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.480347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.480362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.480504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.706 [2024-11-27 08:10:23.480517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.706 qpair failed and we were unable to recover it. 00:27:29.706 [2024-11-27 08:10:23.480586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.480599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.480691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.480703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.480862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.480875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.481019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.481033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.481234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.481247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.481455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.481471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 
00:27:29.707 [2024-11-27 08:10:23.481556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.481567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.481776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.481789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.481942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.481968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.482066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.482078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.482160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.482173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.482331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.482348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.482505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.482517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.482592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.482604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.482808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.482820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.483023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.483037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 
00:27:29.707 [2024-11-27 08:10:23.483197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.483211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.483453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.483467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.483635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.483647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.483788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.483803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.483966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.483979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.484204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.484216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.484422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.484434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.484532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.484544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.484691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.484705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.484861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.484874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 
00:27:29.707 [2024-11-27 08:10:23.485072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.485086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.485237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.485249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.485402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.485416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.485596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.485609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.485849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.485862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.485954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.707 [2024-11-27 08:10:23.485965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.707 qpair failed and we were unable to recover it. 00:27:29.707 [2024-11-27 08:10:23.486061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.486073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.486277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.486291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.486399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.486412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.486676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.486688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 
00:27:29.708 [2024-11-27 08:10:23.486903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.486916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.487058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.487073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.487249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.487268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.487481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.487498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.487718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.487735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.487901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.487917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.488108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.488125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.488210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.488226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.488378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.488394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.488476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.488491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 
00:27:29.708 [2024-11-27 08:10:23.488641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.488657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.488890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.488905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.489161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.489179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.489339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.489354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.489588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.489605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.489786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.489805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.489999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.490017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.490269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.490285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.490435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.490452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.490628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.490644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 
00:27:29.708 [2024-11-27 08:10:23.490731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.490747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.490893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.490909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.491127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.491144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.491361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.491377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.491650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.491666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.491894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.491910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.492116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.492134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.492385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.492402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.708 [2024-11-27 08:10:23.492612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.708 [2024-11-27 08:10:23.492628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.708 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.492734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.492752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 
00:27:29.709 [2024-11-27 08:10:23.493005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.493022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.493243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.493259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.493516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.493532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.493743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.493761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.493971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.493988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.494228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.494245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.494405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.494422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.494519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.494535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.494627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.494643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.494874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.494890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 
00:27:29.709 [2024-11-27 08:10:23.495040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.495056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.495286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.495303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.495537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.495553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.495739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.495752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.495911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.495925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.496086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.496099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.496252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.496264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.496465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.496478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.496630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.496643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 [2024-11-27 08:10:23.496794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-11-27 08:10:23.496807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 
00:27:29.709 [2024-11-27 08:10:23.496970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:29.709 [2024-11-27 08:10:23.496986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 
00:27:29.709 qpair failed and we were unable to recover it. 
00:27:29.709 [... the identical "connect() failed, errno = 111" / "sock connection error ... with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats continuously from 08:10:23.496970 through 08:10:23.540533, first for tqpair=0x7f39c8000b90, then for tqpair=0x7f39d0000b90, then for tqpair=0x7f39c4000b90 ...]
00:27:29.716 [2024-11-27 08:10:23.540685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.540702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.540885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.540902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.541062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.541078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.541286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.541304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.541539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.541555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.541709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.541725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.541884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.541900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.541992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.542008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.542168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.542184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.542354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.542370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 
00:27:29.716 [2024-11-27 08:10:23.542515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.542532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.542796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.542812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.543069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.543086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.543193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.543209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.543314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.543329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.543490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.543507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.543654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.543669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.543826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.543844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.543932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.543951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-11-27 08:10:23.544114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-11-27 08:10:23.544131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 
00:27:29.717 [2024-11-27 08:10:23.544283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.544300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.544446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.544462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.544636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.544655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.544910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.544927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.545156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.545173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.545384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.545401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.545621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.545637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.545885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.545902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.546005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.546021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.546123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.546139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 
00:27:29.717 [2024-11-27 08:10:23.546286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.546301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.546512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.546528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.546766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.546783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.547001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.547017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.547168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.547184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.547391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.547408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.547566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.547582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.547790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.547807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.548066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.548083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.548261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.548278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 
00:27:29.717 [2024-11-27 08:10:23.548543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.548559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.548774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.548791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.548959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.548974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.549123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.549139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.549282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.549299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.549508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.549524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.549781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.549797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.549956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.549973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.550231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.550248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.550402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.550418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 
00:27:29.717 [2024-11-27 08:10:23.550582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.550599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.550756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.550772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.551011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-11-27 08:10:23.551028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-11-27 08:10:23.551265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.551281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.551454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.551470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.551701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.551717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.551899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.551915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.552099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.552117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.552349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.552366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.552528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.552545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 
00:27:29.718 [2024-11-27 08:10:23.552754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.552770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.552928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.552944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.553055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.553074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.553296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.553314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.553546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.553562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.553819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.553835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.554009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.554026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.554223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.554240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.554400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.554417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.554644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.554661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 
00:27:29.718 [2024-11-27 08:10:23.554805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.554821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.554907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.554922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.555170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.555187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.555345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.555361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.555518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.555535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.555712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.555729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.555886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.555902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.556046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.556063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.556233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.556250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.556404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.556419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 
00:27:29.718 [2024-11-27 08:10:23.556580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.556597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.556782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.556799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.556885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.556901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.557002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.557017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.557271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-11-27 08:10:23.557288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-11-27 08:10:23.557522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.557538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.557639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.557655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.557812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.557829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.557921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.557936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.558090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.558107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 
00:27:29.719 [2024-11-27 08:10:23.558341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.558357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.558537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.558553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.558713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.558729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.558939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.558958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.559171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.559187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.559303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.559319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.559467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.559483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.559656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.559672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.559874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.559891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.560068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.560085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 
00:27:29.719 [2024-11-27 08:10:23.560237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.560254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.560491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.560507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.560692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.560713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.560936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.560967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.561179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.561195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.561347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.561363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.561476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.561491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.561720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.561737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.561834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.561848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.561997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.562013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 
00:27:29.719 [2024-11-27 08:10:23.562164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.562180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.562429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.562446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.562679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.562695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.562943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.562964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.563231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.563247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.563438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.563454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.563717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.563734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.563941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.563963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.564129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-11-27 08:10:23.564145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-11-27 08:10:23.564329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.564346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 
00:27:29.720 [2024-11-27 08:10:23.564511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.564527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-11-27 08:10:23.564784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.564800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-11-27 08:10:23.565009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.565026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-11-27 08:10:23.565202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.565218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-11-27 08:10:23.565376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.565391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-11-27 08:10:23.565499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.565515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-11-27 08:10:23.565761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.565777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-11-27 08:10:23.565936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.565966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-11-27 08:10:23.566149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.566165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-11-27 08:10:23.566274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-11-27 08:10:23.566291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 
00:27:29.720 [2024-11-27 08:10:23.566381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:29.720 [2024-11-27 08:10:23.566395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 
00:27:29.720 qpair failed and we were unable to recover it. 
[... the same three-line pattern -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." -- repeats for every connection attempt between 08:10:23.566 and 08:10:23.606, cycling through tqpair=0x7f39c4000b90, tqpair=0xa19be0 and tqpair=0x7f39c8000b90, all against addr=10.0.0.2, port=4420 ...] 
00:27:29.727 [2024-11-27 08:10:23.606087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:29.727 [2024-11-27 08:10:23.606100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 
00:27:29.727 qpair failed and we were unable to recover it.
00:27:29.727 [2024-11-27 08:10:23.606266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.606279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.606507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.606520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.606665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.606678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.606765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.606776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.607028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.607040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.607264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.607277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.607503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.607515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.607660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.607673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.607897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.607909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.608132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.608145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 
00:27:29.727 [2024-11-27 08:10:23.608344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.608357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.608519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.608531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.608603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.608615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.608839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.608851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.609002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.609015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.609196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.609209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.609365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.609378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.609514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.609526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.609702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.609714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.609844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.609857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 
00:27:29.727 [2024-11-27 08:10:23.609986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.609999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.610089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.610100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.610234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.610245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.610472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.610485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.610643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.610655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.727 qpair failed and we were unable to recover it. 00:27:29.727 [2024-11-27 08:10:23.610880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.727 [2024-11-27 08:10:23.610892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.610992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.611004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.611174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.611187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.611330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.611342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.611542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.611557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 
00:27:29.728 [2024-11-27 08:10:23.611724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.611737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.611939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.611956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.612047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.612058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.612290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.612302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.612473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.612486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.612563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.612574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.612790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.612802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.612956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.612968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.613192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.613204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.613418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.613430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 
00:27:29.728 [2024-11-27 08:10:23.613603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.613616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.613772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.613784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.613984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.613997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.614222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.614234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.614386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.614398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.614634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.614646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.614875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.614887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.615018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.615031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.615264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.615276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.615508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.615520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 
00:27:29.728 [2024-11-27 08:10:23.615665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.615677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.615771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.615784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.615956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.615970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.616057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.616069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.616145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.616156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.616378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.616391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.616542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.616554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.728 qpair failed and we were unable to recover it. 00:27:29.728 [2024-11-27 08:10:23.616701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.728 [2024-11-27 08:10:23.616713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.616845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.616857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.617082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.617095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 
00:27:29.729 [2024-11-27 08:10:23.617310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.617322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.617474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.617486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.617654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.617667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.617836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.617849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.618098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.618111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.618336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.618349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.618552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.618564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.618785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.618797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.619020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.619034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.619234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.619248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 
00:27:29.729 [2024-11-27 08:10:23.619386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.619398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.619581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.619594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.619676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.619687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.619914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.619927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.620075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.620088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.620163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.620174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.620339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.620351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.620496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.620508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.620672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.620684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.620960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.620973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 
00:27:29.729 [2024-11-27 08:10:23.621200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.621212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.621373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.621386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.621583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.621595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.621743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.621755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.621980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.621993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.622216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.622228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.622396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.622408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.622545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.622557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.729 [2024-11-27 08:10:23.622780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.729 [2024-11-27 08:10:23.622798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.729 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.622951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.622964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 
00:27:29.730 [2024-11-27 08:10:23.623041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.623053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.623252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.623265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.623439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.623452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.623602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.623614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.623839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.623851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.624072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.624086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.624341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.624354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.624488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.624500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.624658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.624671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.624751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.624761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 
00:27:29.730 [2024-11-27 08:10:23.624910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.624923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.625094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.625106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.625283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.625296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.625384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.625394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.625545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.625557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.625660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.625672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.625874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.625886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.626111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.626124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.626275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.626288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.626549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.626565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 
00:27:29.730 [2024-11-27 08:10:23.626815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.626828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.627056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.627069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.627156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.627167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.627331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.627345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.627596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.627609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.627757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.627770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.627985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.627998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.730 qpair failed and we were unable to recover it. 00:27:29.730 [2024-11-27 08:10:23.628141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.730 [2024-11-27 08:10:23.628153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.628403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.628416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.628561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.628573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 
00:27:29.731 [2024-11-27 08:10:23.628797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.628810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.628945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.628961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.629104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.629116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.629342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.629355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.629529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.629541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.629617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.629628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.629830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.629843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.630055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.630068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.630273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.630286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.630429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.630441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 
00:27:29.731 [2024-11-27 08:10:23.630533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.630544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.630791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.630803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.631005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.631018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.631185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.631197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.631418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.631431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.631581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.631593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.631739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.631751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.631928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.631940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.632079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.632092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.632323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.632336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 
00:27:29.731 [2024-11-27 08:10:23.632475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.632487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.632573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.632584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.632784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.632796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.633019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.633031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.633312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.633325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.633532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.633545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.633787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.633800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.634051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.634063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.634210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.634223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 00:27:29.731 [2024-11-27 08:10:23.634447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.731 [2024-11-27 08:10:23.634461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.731 qpair failed and we were unable to recover it. 
00:27:29.732 [2024-11-27 08:10:23.634640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.634653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.634856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.634868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.635022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.635036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.635181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.635193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.635335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.635347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.635618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.635631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.635775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.635788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.635926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.635938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.636026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.636037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.636247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.636259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 
00:27:29.732 [2024-11-27 08:10:23.636419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.636431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.636582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.636594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.636759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.636772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.636915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.636927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.637131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.637144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.637225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.637236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.637323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.637334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.637577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.637590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.637751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.637764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.637997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.638010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 
00:27:29.732 [2024-11-27 08:10:23.638124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.638136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.638270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.638282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.638416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.638429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.638578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.638589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.638756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.638768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.638940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.638956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.639216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.639229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.639365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.639377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.639620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.639633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 00:27:29.732 [2024-11-27 08:10:23.639776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.639789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.732 qpair failed and we were unable to recover it. 
00:27:29.732 [2024-11-27 08:10:23.640038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.732 [2024-11-27 08:10:23.640051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.640199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.640211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.640399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.640411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.640629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.640642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.640791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.640803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.640960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.640972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.641118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.641130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.641225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.641236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.641379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.641392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.641590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.641605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 
00:27:29.733 [2024-11-27 08:10:23.641842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.641855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.642083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.642096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.642328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.642340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.642575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.642587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.642656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.642668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.642814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.642825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.642973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.642987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.643137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.643149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.643287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.643299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.643523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.643536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 
00:27:29.733 [2024-11-27 08:10:23.643690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.643701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.643897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.643908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.644004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.644015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.644186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.644199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.644352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.644364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.644507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.644519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.644732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.644745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.644893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.644905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.645081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.645094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.645315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.645327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 
00:27:29.733 [2024-11-27 08:10:23.645483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.645495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.645668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.645680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.645763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.645775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.645964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.733 [2024-11-27 08:10:23.645977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.733 qpair failed and we were unable to recover it. 00:27:29.733 [2024-11-27 08:10:23.646220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.646232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.646461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.646473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.646678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.646691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.646783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.646796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.647042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.647055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.647229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.647242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 
00:27:29.734 [2024-11-27 08:10:23.647413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.647426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.647594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.647606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.647828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.647841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.648074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.648088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.648240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.648252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.648444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.648457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.648608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.648620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.648766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.648778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.648930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.648942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.649204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.649219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 
00:27:29.734 [2024-11-27 08:10:23.649369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.649381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.649467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.649478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.649694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.649706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.649937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.649956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.650212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.650224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.650427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.650439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.650607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.650618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.650697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.650708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.650867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.650879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.650972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.650984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 
00:27:29.734 [2024-11-27 08:10:23.651186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.651199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.651348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.651360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.651514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.651527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.651665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.651678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.651933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.651945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.652168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.652180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.734 qpair failed and we were unable to recover it. 00:27:29.734 [2024-11-27 08:10:23.652400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.734 [2024-11-27 08:10:23.652413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.652616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.652629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.652855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.652867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.653020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.653033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 
00:27:29.735 [2024-11-27 08:10:23.653204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.653216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.653362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.653374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.653526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.653539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.653777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.653789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.653996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.654009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.654240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.654253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.654404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.654416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.654627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.654640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.654841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.654854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.655006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.655019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 
00:27:29.735 [2024-11-27 08:10:23.655254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.655266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.655429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.655441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.655592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.655605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.655805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.655818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.656038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.656050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.656192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.656204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.656360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.656372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.656550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.656562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.656785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.656797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.657037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.657052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 
00:27:29.735 [2024-11-27 08:10:23.657288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.657306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.657443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.657456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.657666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.657678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.657831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.657843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.657926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.657938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.658176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.658189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.658387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.658399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.658555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.658567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.735 qpair failed and we were unable to recover it. 00:27:29.735 [2024-11-27 08:10:23.658770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.735 [2024-11-27 08:10:23.658783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.658958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.658970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 
00:27:29.736 [2024-11-27 08:10:23.659124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.659136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.659290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.659302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.659469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.659481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.659643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.659655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.659790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.659802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.660003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.660015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.660149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.660161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.660319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.660331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.660500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.660512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.660715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.660728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 
00:27:29.736 [2024-11-27 08:10:23.660872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.660884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.661122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.661135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.661290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.661302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.661554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.661566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.661746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.661759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.661967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.661985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.662272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.662294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.662456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.662472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.662727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.662744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.662856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.662872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 
00:27:29.736 [2024-11-27 08:10:23.663043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.663060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.663243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.663260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.663472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.663488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.663652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.663668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.663901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.663917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.664094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.664110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.664203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.736 [2024-11-27 08:10:23.664219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.736 qpair failed and we were unable to recover it. 00:27:29.736 [2024-11-27 08:10:23.664453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.664470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.664724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.664740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.664922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.664942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 
00:27:29.737 [2024-11-27 08:10:23.665117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.665133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.665287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.665303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.665511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.665526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.665747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.665764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.666047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.666064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.666275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.666292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.666548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.666565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.666705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.666721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.666902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.666919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.667161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.667178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 
00:27:29.737 [2024-11-27 08:10:23.667360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.667376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.667528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.667545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.667698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.667714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.667902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.667919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.668181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.668198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.668371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.668387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.668483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.668499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.668731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.668748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.668901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.668918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.669091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.669108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 
00:27:29.737 [2024-11-27 08:10:23.669254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.669270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.669502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.669518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.669621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.669638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.669794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.669810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.669963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.669981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.670215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.670231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.670408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.670422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.737 qpair failed and we were unable to recover it. 00:27:29.737 [2024-11-27 08:10:23.670578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.737 [2024-11-27 08:10:23.670590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.670759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.670770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.670970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.670983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 
00:27:29.738 [2024-11-27 08:10:23.671073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.671085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.671316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.671328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.671509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.671521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.671607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.671619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.671715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.671726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.671945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.671963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.672124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.672136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.672347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.672359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.672494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.672507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.672595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.672610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 
00:27:29.738 [2024-11-27 08:10:23.672780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.672792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.673016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.673028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.673202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.673214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.673439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.673452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.673613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.673625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.673779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.673791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.674013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.674026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.674268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.674280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.674360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.674372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.674571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.674584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 
00:27:29.738 [2024-11-27 08:10:23.674729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.674742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.674894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.674907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.675071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.675083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.675155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.675166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.675366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.675378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.675515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.675528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.675723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.675736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.675977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.675990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.738 qpair failed and we were unable to recover it. 00:27:29.738 [2024-11-27 08:10:23.676130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.738 [2024-11-27 08:10:23.676142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.676295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.676307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 
00:27:29.739 [2024-11-27 08:10:23.676484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.676496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.676635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.676647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.676808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.676821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.676981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.676994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.677070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.677082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.677280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.677293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.677373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.677384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.677520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.677534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.677734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.677746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.677911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.677923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 
00:27:29.739 [2024-11-27 08:10:23.678154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.678167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.678311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.678324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.678473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.678486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.678626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.678639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.678845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.678858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.679088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.679101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.679250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.679262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.679473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.679486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.679683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.679695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.679923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.679937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 
00:27:29.739 [2024-11-27 08:10:23.680129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.680141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.680336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.680350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.680572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.680584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.680767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.680779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.680955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.680969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.681059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.681070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.681209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.681222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.681363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.681377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.681457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.681468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 00:27:29.739 [2024-11-27 08:10:23.681708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.739 [2024-11-27 08:10:23.681721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.739 qpair failed and we were unable to recover it. 
00:27:29.739 [2024-11-27 08:10:23.681951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.681963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.682114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.682126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.682267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.682279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.682527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.682540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.682759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.682771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.682943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.682960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.683106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.683118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.683212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.683224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.683422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.683435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.683585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.683597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 
00:27:29.740 [2024-11-27 08:10:23.683798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.683811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.683888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.683899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.684125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.684138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.684285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.684297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.684448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.684460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.684593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.684604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.684745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.684757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.684900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.684912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.685077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.685090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.685224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.685236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 
00:27:29.740 [2024-11-27 08:10:23.685389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.685402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.685617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.685636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.685874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.685885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.686041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.686054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.686232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.686245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.686399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.686411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.686561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.686573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.686747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.686760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.686906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.686918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.687067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.687082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 
00:27:29.740 [2024-11-27 08:10:23.687313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.687325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.687480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.687492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.687734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.687747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.740 [2024-11-27 08:10:23.687993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.740 [2024-11-27 08:10:23.688005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.740 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.688231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.688244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.688489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.688501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.688705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.688718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.688888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.688900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.689050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.689063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.689290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.689303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 
00:27:29.741 [2024-11-27 08:10:23.689448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.689460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.689622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.689635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.689769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.689781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.689962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.689975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.690062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.690074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.690156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.690168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.690388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.690401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.690606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.690618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.690763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.690776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.690926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.690938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 
00:27:29.741 [2024-11-27 08:10:23.691077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.691089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.691310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.691323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.691417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.691429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.691572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.691584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.691787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.691799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.692032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.692045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.692269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.692282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.692372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.692383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.741 [2024-11-27 08:10:23.692535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.741 [2024-11-27 08:10:23.692547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.741 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.692775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.692787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 
00:27:29.742 [2024-11-27 08:10:23.693022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.693035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.693243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.693256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.693396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.693408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.693646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.693659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.693808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.693820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.694018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.694032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.694234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.694246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.694472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.694485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.694636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.694648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.694867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.694881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 
00:27:29.742 [2024-11-27 08:10:23.695106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.695119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.695284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.695296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.695502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.695515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.695735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.695748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.695883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.695902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.696060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.696072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.696204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.696216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.696361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.696373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.696591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.696604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.696744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.696756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 
00:27:29.742 [2024-11-27 08:10:23.696917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.696930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.697154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.697167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.697315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.697327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.697550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.697563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.697722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.697734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.697951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.697964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.698118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.698130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.698307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.698319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.698474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.698487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.698733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.698746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 
00:27:29.742 [2024-11-27 08:10:23.698897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.698909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.699056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.742 [2024-11-27 08:10:23.699068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.742 qpair failed and we were unable to recover it. 00:27:29.742 [2024-11-27 08:10:23.699216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.699228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.699307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.699318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.699404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.699415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.699575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.699587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.699732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.699745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.699970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.699983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.700202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.700214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.700479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.700492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 
00:27:29.743 [2024-11-27 08:10:23.700719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.700732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.700877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.700890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.701115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.701128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.701210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.701222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.701467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.701479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.701576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.701587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.701719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.701729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.701910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.701920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.702118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.702135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.702226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.702237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 
00:27:29.743 [2024-11-27 08:10:23.702438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.702451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.702661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.702673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.702916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.702930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.703175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.703188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.703287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.703298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.703542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.703554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.703717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.703731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.703906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.703920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.704015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.704027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.704228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.704241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 
00:27:29.743 [2024-11-27 08:10:23.704442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.704454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.704546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.704557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.704659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.704670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.743 [2024-11-27 08:10:23.704827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.743 [2024-11-27 08:10:23.704840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.743 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.704976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.704990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.705194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.705208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.705437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.705450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.705667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.705680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.705907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.705920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.706070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.706083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 
00:27:29.744 [2024-11-27 08:10:23.706250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.706263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.706433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.706446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.706594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.706607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.706762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.706775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.707000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.707012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.707092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.707103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.707276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.707292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.707435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.707450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.707598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.707611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.707841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.707855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 
00:27:29.744 [2024-11-27 08:10:23.708004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.708018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.708231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.708244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.708323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.708335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.708436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.708449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.708534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.708545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.708698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.708711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.708860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.708873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.708976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.708991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.709219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.709233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.709319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.709330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 
00:27:29.744 [2024-11-27 08:10:23.709508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.709520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.709775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.709788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.709939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.709963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.710118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.710130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.710347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.710359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.710518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.710531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.744 qpair failed and we were unable to recover it. 00:27:29.744 [2024-11-27 08:10:23.710797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.744 [2024-11-27 08:10:23.710812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.711087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.711101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.711249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.711262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.711421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.711433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 
00:27:29.745 [2024-11-27 08:10:23.711583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.711597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.711868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.711882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.712033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.712047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.712255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.712269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.712503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.712518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.712681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.712693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.712874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.712886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.713127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.713141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.713392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.713405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.713633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.713645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 
00:27:29.745 [2024-11-27 08:10:23.713865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.713879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.714040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.714054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.714250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.714263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.714400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.714412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.714614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.714626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.714803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.714816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.714971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.714986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.715235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.715248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.715458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.715470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.715729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.715743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 
00:27:29.745 [2024-11-27 08:10:23.715995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.716008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.716222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.716234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.716394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.716407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.716626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.716638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.716854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.716869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.716972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.716984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.717132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.717146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.745 [2024-11-27 08:10:23.717399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.745 [2024-11-27 08:10:23.717413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.745 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.717500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.717512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.717662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.717675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 
00:27:29.746 [2024-11-27 08:10:23.717822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.717836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.717926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.717937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.718200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.718219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.718315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.718330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.718546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.718562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.718664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.718679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.718768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.718784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.718959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.718975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.719135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.719152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.719256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.719272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 
00:27:29.746 [2024-11-27 08:10:23.719429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.719445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.719601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.719617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.719847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.719864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.720133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.720150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.720248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.720262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.720515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.720532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.720624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.720639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.720876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.720892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.721142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.721159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.721392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.721408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 
00:27:29.746 [2024-11-27 08:10:23.721652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.721668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.721826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.721841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.722045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.722064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.722217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.722233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.722381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.722398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.722555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.722571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.722738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.722759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.722970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.722987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.723229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.723245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.723455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.723473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 
00:27:29.746 [2024-11-27 08:10:23.723644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.723661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.723801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.723817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.746 [2024-11-27 08:10:23.723975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.746 [2024-11-27 08:10:23.723992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.746 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.724167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.724183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.724360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.724377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.724609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.724626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.724765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.724782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.724870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.724885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.725059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.725077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.725343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.725359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 
00:27:29.747 [2024-11-27 08:10:23.725464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.725480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.725713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.725730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.725990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.726009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.726158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.726175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.726346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.726363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.726572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.726590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.726739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.726755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.726930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.726946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.727202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.727218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.727370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.727387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 
00:27:29.747 [2024-11-27 08:10:23.727557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.727574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.727734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.727749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.727966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.727984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.728209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.728225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.728373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.728388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.728549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.728566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.728674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.728690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.728860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.728877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.729090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.729107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.729199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.729215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 
00:27:29.747 [2024-11-27 08:10:23.729391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.729408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.729571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.747 [2024-11-27 08:10:23.729588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.747 qpair failed and we were unable to recover it. 00:27:29.747 [2024-11-27 08:10:23.729817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.729833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.730085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.730103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.730336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.730353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.730465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.730483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.730586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.730609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.730843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.730860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.731094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.731112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.731272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.731288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 
00:27:29.748 [2024-11-27 08:10:23.731435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.731452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.731615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.731632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.731788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.731805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.731992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.732010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.732189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.732205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.732366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.732383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.732616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.732633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.732742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.732759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.732908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.732924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.733096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.733113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 
00:27:29.748 [2024-11-27 08:10:23.733277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.733293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.733451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.733469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.733621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.733636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.733713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.733728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.733822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.733837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.734046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.734064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.734222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.734239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.734401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.734417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.734675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.734691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.734837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.734854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 
00:27:29.748 [2024-11-27 08:10:23.734953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.734969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.735169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.735185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.735357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.735375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.735602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.735619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.735827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.735844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.736010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.736028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.736264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.736281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.736376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.748 [2024-11-27 08:10:23.736390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.748 qpair failed and we were unable to recover it. 00:27:29.748 [2024-11-27 08:10:23.736557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.736573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.736780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.736797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 
00:27:29.749 [2024-11-27 08:10:23.736957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.736973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.737190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.737207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.737381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.737398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.737653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.737669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.737902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.737918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.738027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.738042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.738185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.738204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.738351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.738368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.738530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.738547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.738811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.738829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 
00:27:29.749 [2024-11-27 08:10:23.739068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.739085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.739246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.739264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.739410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.739427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.739579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.739595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.739758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.739776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.740012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.740030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.740134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.740150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.740299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.740315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.740465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.740482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.740664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.740681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 
00:27:29.749 [2024-11-27 08:10:23.740837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.740854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.740972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.740990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.741084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.741100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.741356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.741373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.741482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.741498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.741749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.741765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.741944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.741965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.742163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.742182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.742404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.742421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.742583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.742600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 
00:27:29.749 [2024-11-27 08:10:23.742695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.742710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.742944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.742966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.743150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.743165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.743349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.743366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.749 [2024-11-27 08:10:23.743518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.749 [2024-11-27 08:10:23.743535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.749 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.743645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.743661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.743813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.743830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.744037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.744056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.744198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.744214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.744377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.744394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 
00:27:29.750 [2024-11-27 08:10:23.744496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.744510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.744684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.744702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.744872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.744889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.745122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.745139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.745336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.745353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.745529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.745545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.745715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.745736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.745975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.745993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.746208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.746226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.746403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.746420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 
00:27:29.750 [2024-11-27 08:10:23.746681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.746698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.746881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.746897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.747063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.747080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.747328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.747345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.747463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.747480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.747734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.747752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.747921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.747942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.748055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.748072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.748222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.748239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.748383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.748399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 
00:27:29.750 [2024-11-27 08:10:23.748602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.748618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.748725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.748742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.748899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.748915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.749115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.749133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.749296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.749313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.749510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.749527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.749689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.749706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.749920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.749936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.750046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.750062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.750216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.750233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 
00:27:29.750 [2024-11-27 08:10:23.750461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.750478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.750696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.750 [2024-11-27 08:10:23.750713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.750 qpair failed and we were unable to recover it. 00:27:29.750 [2024-11-27 08:10:23.750974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.750990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.751099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.751114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.751283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.751299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.751449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.751467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.751570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.751586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.751755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.751771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.752000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.752018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.752288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.752306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 
00:27:29.751 [2024-11-27 08:10:23.752460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.752476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.752697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.752714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.752922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.752939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.753089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.753106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.753315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.753333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.753595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.753613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.753720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.753739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.753935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.753965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.754132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.754148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.754312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.754328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 
00:27:29.751 [2024-11-27 08:10:23.754560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.754577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.754760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.754776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.754941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.754965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.755065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.755081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.755292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.755309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.755459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.755476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.755626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.755644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.755864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.755880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.756029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.756046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.756148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.756166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 
00:27:29.751 [2024-11-27 08:10:23.756258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.756273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.756495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.756511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.751 [2024-11-27 08:10:23.756689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.751 [2024-11-27 08:10:23.756705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.751 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.756902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.756920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.757183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.757201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.757368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.757384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.757608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.757625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.757835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.757851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.758087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.758104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.758275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.758291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-11-27 08:10:23.758460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.758477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.758734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.758752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.758931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.758952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.759176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.759194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.759348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.759366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.759472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.759489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.759664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.759681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.759907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.759925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.760108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.760126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 00:27:29.752 [2024-11-27 08:10:23.760339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.752 [2024-11-27 08:10:23.760356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-11-27 08:10:23.760564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:29.752 [2024-11-27 08:10:23.760581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 
00:27:29.752 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 08:10:23.760564 through 08:10:23.769931 ...]
00:27:29.753 [2024-11-27 08:10:23.770108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27b20 is same with the state(6) to be set 
[... the identical connect()/qpair-failure sequence then continues from 08:10:23.770259 through 08:10:23.791105, now reported against tqpair=0x7f39c8000b90 (and briefly tqpair=0x7f39c4000b90 between 08:10:23.779903 and 08:10:23.781576), still with addr=10.0.0.2, port=4420 ...]
00:27:29.758 [2024-11-27 08:10:23.791091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:29.758 [2024-11-27 08:10:23.791105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 
00:27:29.758 qpair failed and we were unable to recover it. 
00:27:29.758 [2024-11-27 08:10:23.791186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.791196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.791277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.791289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.791491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.791503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.791649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.791661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.791864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.791878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.791958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.791969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.792055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.792067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.792143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.792156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.792232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.792244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.792478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.792491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 
00:27:29.758 [2024-11-27 08:10:23.792588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.792603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.792702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.792715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.792860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.792872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.793007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.793020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.793154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.793165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.793291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.793303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.793433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.793447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:29.758 [2024-11-27 08:10:23.793513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.758 [2024-11-27 08:10:23.793524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:29.758 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.793668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.793679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.793818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.793831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 
00:27:30.038 [2024-11-27 08:10:23.793905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.793918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.794003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.794015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.794094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.794105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.794193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.794206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.794295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.794308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.794453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.794466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.794528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.794539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.794618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.794629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.794767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.794780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.794865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.794876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 
00:27:30.038 [2024-11-27 08:10:23.795031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.795045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.795132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.795144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.795231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.795243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.795324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.795336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.795482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.795495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.795561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.795572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.795658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.795671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.795825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.795843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.796060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.038 [2024-11-27 08:10:23.796076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.038 qpair failed and we were unable to recover it. 00:27:30.038 [2024-11-27 08:10:23.796172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.796189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 
00:27:30.039 [2024-11-27 08:10:23.796366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.796382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.796559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.796576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.796654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.796671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.796903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.796919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.797094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.797112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.797254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.797270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.797378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.797394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.797626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.797643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.797780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.797799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.797942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.797963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 
00:27:30.039 [2024-11-27 08:10:23.798068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.798087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.798194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.798211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.798363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.798380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.798468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.798484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.798589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.798605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.798819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.798834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.798929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.798942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.799205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.799218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.799301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.799313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.799464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.799476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 
00:27:30.039 [2024-11-27 08:10:23.799622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.799634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.799766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.799780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.799923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.799937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.800027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.800039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.800197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.800209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.800355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.800368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.800450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.800463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.800531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.800543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.800746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.800759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.039 [2024-11-27 08:10:23.800906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.800917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 
00:27:30.039 [2024-11-27 08:10:23.801013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.039 [2024-11-27 08:10:23.801027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.039 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.801172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.801184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.801329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.801342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.801545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.801558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.801633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.801645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.801734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.801746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.801838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.801850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.801964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.801976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.802104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.802117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.802210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.802223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 
00:27:30.040 [2024-11-27 08:10:23.802351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.802364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.802543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.802555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.802652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.802664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.802805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.802820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.802965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.802977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.803072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.803085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.803310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.803322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.803472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.803484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.803633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.803647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.803785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.803798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 
00:27:30.040 [2024-11-27 08:10:23.803956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.803972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.804063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.804076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.804272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.804284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.804430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.804447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.804600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.804612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.804788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.804802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.805057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.805070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.805177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.805189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.805325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.040 [2024-11-27 08:10:23.805338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.040 qpair failed and we were unable to recover it. 00:27:30.040 [2024-11-27 08:10:23.805552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.805564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 
00:27:30.041 [2024-11-27 08:10:23.805778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.805790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.806016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.806031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.806235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.806248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.806420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.806433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.806663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.806675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.806757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.806769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.806859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.806872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.807023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.807037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.807181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.807194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.807278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.807289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 
00:27:30.041 [2024-11-27 08:10:23.807372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.807385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.807467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.807479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.807628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.807641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.807725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.807738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.807807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.807819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.807917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.807930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.808014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.808026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.808147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.808159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.808237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.808249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.808340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.808353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 
00:27:30.041 [2024-11-27 08:10:23.808485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.808497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.808576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.808588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.808721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.808732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.808919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.808933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.809013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.809025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.809183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.809196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.809349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.809363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.809426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.809437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.809660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.809672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.041 qpair failed and we were unable to recover it. 00:27:30.041 [2024-11-27 08:10:23.809815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.041 [2024-11-27 08:10:23.809827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 
00:27:30.042 [2024-11-27 08:10:23.809904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.809919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.810072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.810086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.810157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.810168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.810245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.810258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.810397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.810410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.810487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.810498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.810642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.810653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.810789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.810805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.810869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.810880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.811160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.811173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 
00:27:30.042 [2024-11-27 08:10:23.811340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.811353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.811435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.811447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.811530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.811543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.811614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.811626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.811830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.811843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.812063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.812077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.812254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.812267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.812405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.812416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.812505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.812518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.812664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.812676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 
00:27:30.042 [2024-11-27 08:10:23.812763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.812776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.812967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.812979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.813148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.813162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.813331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.813344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.813441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.813452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.813726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.813739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.813983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.813997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.042 qpair failed and we were unable to recover it. 00:27:30.042 [2024-11-27 08:10:23.814200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.042 [2024-11-27 08:10:23.814219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.814308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.814323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.814503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.814520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 
00:27:30.043 [2024-11-27 08:10:23.814745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.814762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.814923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.814940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.815067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.815084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.815180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.815197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.815345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.815361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.815511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.815528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.815705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.815720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.815930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.815943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.816176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.816188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.816359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.816371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 
00:27:30.043 [2024-11-27 08:10:23.816598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.816613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.816699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.816712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.816927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.816940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.817151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.817163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.817264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.817277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.817480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.817493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.817663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.817676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.817832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.817845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.818019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.818031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.818101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.818114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 
00:27:30.043 [2024-11-27 08:10:23.818345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.818359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.818456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.818469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.818543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.818555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.818758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.818770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.818975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.818987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.819152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.819165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.043 [2024-11-27 08:10:23.819383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.043 [2024-11-27 08:10:23.819396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.043 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.819605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.819616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.819758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.819769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.819917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.819930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 
00:27:30.044 [2024-11-27 08:10:23.820153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.820167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.820294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.820307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.820508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.820521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.820705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.820717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.820918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.820932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.821111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.821125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.821223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.821235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.821333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.821363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.821518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.821536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.821706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.821722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 
00:27:30.044 [2024-11-27 08:10:23.821936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.821958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.822140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.822156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.822335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.822352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.822570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.822585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.822693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.822710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.822866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.822883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.823112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.823129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.823275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.823291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.823518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.823535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.044 qpair failed and we were unable to recover it. 00:27:30.044 [2024-11-27 08:10:23.823739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.044 [2024-11-27 08:10:23.823755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 
00:27:30.045 [2024-11-27 08:10:23.823988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.824004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.824227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.824243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.824425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.824442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.824677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.824693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.824856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.824873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.825034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.825050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.825156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.825173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.825336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.825352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.825559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.825576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.825812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.825828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 
00:27:30.045 [2024-11-27 08:10:23.826000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.826018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.826257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.826273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.826454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.826472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.826633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.826648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.826810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.826829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.827076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.827094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.827304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.827319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.827559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.827576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.827734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.827751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.827983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.828001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 
00:27:30.045 [2024-11-27 08:10:23.828087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.828103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.828354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.828371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.828583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.828600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.828860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.828876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.829131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.829149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.829303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.829319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.829470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.829486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.829593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.829609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.045 [2024-11-27 08:10:23.829703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.045 [2024-11-27 08:10:23.829720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.045 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.829978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.829996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 
00:27:30.046 [2024-11-27 08:10:23.830143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.830160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.830247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.830262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.830402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.830419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.830520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.830538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.830793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.830810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.831038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.831055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.831252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.831268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.831497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.831513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.831723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.831740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.832009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.832026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 
00:27:30.046 [2024-11-27 08:10:23.832259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.832276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.832427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.832444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.832547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.832563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.832709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.832726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.832892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.832910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.833067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.833084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.833192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.833209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.833447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.833464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.833672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.833689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.833899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.833916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 
00:27:30.046 [2024-11-27 08:10:23.834071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.834089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.834299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.834317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.834528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.834545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.834705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.834721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.834930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.834953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.835058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.835076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.835251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.835267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.835411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.835424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.046 qpair failed and we were unable to recover it. 00:27:30.046 [2024-11-27 08:10:23.835642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.046 [2024-11-27 08:10:23.835656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.835796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.835809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 
00:27:30.047 [2024-11-27 08:10:23.835956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.835970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.836118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.836131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.836285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.836298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.836381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.836393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.836544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.836556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.836707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.836721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.836890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.836902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.837102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.837118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.837261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.837273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.837442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.837455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 
00:27:30.047 [2024-11-27 08:10:23.837588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.837601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.837817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.837830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.838033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.838046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.838146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.838158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.838318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.838332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.838551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.838563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.838715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.838729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.838961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.838975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.839115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.839127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.839275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.839291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 
00:27:30.047 [2024-11-27 08:10:23.839446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.839459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.839662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.839675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.839762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.839773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.839951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.839966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.840182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.840196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.840420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.840434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.840585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.840598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.047 [2024-11-27 08:10:23.840814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.047 [2024-11-27 08:10:23.840828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.047 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.840977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.840993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.841171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.841185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 
00:27:30.048 [2024-11-27 08:10:23.841325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.841337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.841470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.841485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.841619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.841631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.841725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.841736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.841876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.841890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.842040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.842059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.842260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.842274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.842509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.842523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.842710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.842723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.842923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.842937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 
00:27:30.048 [2024-11-27 08:10:23.843056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.843070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.843170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.843183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.843267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.843279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.843356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.843367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.843515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.843529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.843670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.843682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.843881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.843894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.844147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.844161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.844329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.844341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.844495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.844509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 
00:27:30.048 [2024-11-27 08:10:23.844760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.844772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.844877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.844890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.844985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.844997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.845072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.845085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.845315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.845330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.845501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.845514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.845649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.048 [2024-11-27 08:10:23.845662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.048 qpair failed and we were unable to recover it. 00:27:30.048 [2024-11-27 08:10:23.845890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.845902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.846149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.846163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.846338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.846350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 
00:27:30.049 [2024-11-27 08:10:23.846414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.846425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.846629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.846643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.846859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.846872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.847012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.847027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.847124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.847137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.847220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.847231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.847366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.847379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.847524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.847536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.847732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.847746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.847880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.847892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 
00:27:30.049 [2024-11-27 08:10:23.848118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.848132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.848219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.848230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.848395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.848407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.848553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.848568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.848671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.848686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.848936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.848963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.849066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.849078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.849279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.849293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.849515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.849527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.849692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.849705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 
00:27:30.049 [2024-11-27 08:10:23.849859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.849871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.850016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.850030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.850176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.850188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.850357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.850369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.850540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.850553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.850696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.850711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.049 qpair failed and we were unable to recover it. 00:27:30.049 [2024-11-27 08:10:23.850816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.049 [2024-11-27 08:10:23.850828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.851052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.851067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.851212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.851226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.851480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.851494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 
00:27:30.050 [2024-11-27 08:10:23.851668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.851682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.851937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.851958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.852028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.852039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.852213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.852226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.852363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.852380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.852562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.852574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.852811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.852823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.852920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.852932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.853072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.853085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.853249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.853263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 
00:27:30.050 [2024-11-27 08:10:23.853467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.853480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.853634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.853647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.853826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.853839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.854090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.854106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.854341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.854354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.854488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.854502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.854587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.854599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.854677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.854688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.854870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.854883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.855020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.855034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 
00:27:30.050 [2024-11-27 08:10:23.855167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.050 [2024-11-27 08:10:23.855179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.050 qpair failed and we were unable to recover it. 00:27:30.050 [2024-11-27 08:10:23.855410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.855422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.855571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.855584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.855720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.855736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.855825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.855837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.855973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.855989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.856160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.856173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.856324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.856336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.856587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.856601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.856683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.856695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 
00:27:30.051 [2024-11-27 08:10:23.856787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.856799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.857013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.857027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.857237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.857249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.857411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.857426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.857509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.857520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.857721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.857734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.857891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.857904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.858123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.858140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.858289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.858302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.858509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.858522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 
00:27:30.051 [2024-11-27 08:10:23.858668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.858680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.858769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.858781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.858914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.858927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.859068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.859082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.859235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.859248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.859361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.859373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.859472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.859484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.859617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.859630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.859901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.859915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 00:27:30.051 [2024-11-27 08:10:23.860081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.051 [2024-11-27 08:10:23.860095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.051 qpair failed and we were unable to recover it. 
00:27:30.051 [2024-11-27 08:10:23.860256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.860269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.860428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.860441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.860599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.860611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.860774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.860787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.860942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.860960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.861190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.861203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.861452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.861467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.861705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.861724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.861859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.861871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.862092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.862105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 
00:27:30.052 [2024-11-27 08:10:23.862337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.862352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.862509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.862521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.862752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.862766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.862939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.862965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.863116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.863129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.863269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.863284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.863484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.863497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.863598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.863611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.863837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.863851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.863941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.863959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 
00:27:30.052 [2024-11-27 08:10:23.864114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.864127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.864215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.864230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.864364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.864377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.864527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.864541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.864686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.864698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.864922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.864935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.865112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.865126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.865338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.865352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.865453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.865466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 00:27:30.052 [2024-11-27 08:10:23.865567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.052 [2024-11-27 08:10:23.865578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.052 qpair failed and we were unable to recover it. 
00:27:30.053 [2024-11-27 08:10:23.865730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.865742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.865819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.865830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.865938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.865959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.866113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.866125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.866272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.866286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.866530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.866543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.866691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.866703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.866908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.866924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.867130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.867144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.867230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.867241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 
00:27:30.053 [2024-11-27 08:10:23.867343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.867356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.867558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.867570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.867722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.867736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.867909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.867921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.868171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.868184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.868413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.868426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.868505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.868518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.868681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.868695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.868922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.868934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.869040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.869055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 
00:27:30.053 [2024-11-27 08:10:23.869310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.869322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.869466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.869479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.869560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.869572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.869728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.869741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.869891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.869904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.870065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.870080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.870147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.870158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.870320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.870332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.053 [2024-11-27 08:10:23.870489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.053 [2024-11-27 08:10:23.870503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.053 qpair failed and we were unable to recover it. 00:27:30.054 [2024-11-27 08:10:23.870710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.054 [2024-11-27 08:10:23.870722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.054 qpair failed and we were unable to recover it. 
00:27:30.054 [2024-11-27 08:10:23.870876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.054 [2024-11-27 08:10:23.870887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.054 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously between 08:10:23.870876 and 08:10:23.909386, cycling through tqpair handles 0x7f39c8000b90, 0x7f39d0000b90 and 0xa19be0 ...]
00:27:30.061 [2024-11-27 08:10:23.909368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.061 [2024-11-27 08:10:23.909386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420
00:27:30.061 qpair failed and we were unable to recover it.
00:27:30.061 [2024-11-27 08:10:23.909480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.909495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.909705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.909722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.909869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.909885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.910122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.910139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.910349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.910366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.910550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.910567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.910658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.910673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.910772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.910789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.911025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.911042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.911255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.911271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 
00:27:30.061 [2024-11-27 08:10:23.911362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.911380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.911468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.911484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.911719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.911735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.911994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.912012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.912238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.912256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.912482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.061 [2024-11-27 08:10:23.912498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.061 qpair failed and we were unable to recover it. 00:27:30.061 [2024-11-27 08:10:23.912660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.912676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.912828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.912844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.912979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.912997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.913105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.913121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 
00:27:30.062 [2024-11-27 08:10:23.913209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.913225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.913328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.913344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.913431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.913446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.913681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.913698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.913842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.913859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.913933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.913953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.914046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.914066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.914149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.914164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.914258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.914274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.914372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.914389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 
00:27:30.062 [2024-11-27 08:10:23.914491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.914507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.914692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.914709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.914918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.914934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.915015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.915031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.915178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.915194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.915431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.915447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.915596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.915609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.915704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.915716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.915875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.915888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.915982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.915995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 
00:27:30.062 [2024-11-27 08:10:23.916070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.916083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.916161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.916172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.916312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.916325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.916391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.916404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.916479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.916491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.062 [2024-11-27 08:10:23.916565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.062 [2024-11-27 08:10:23.916577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.062 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.916714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.916726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.916812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.916824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.916970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.916984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.917118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.917132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 
00:27:30.063 [2024-11-27 08:10:23.917210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.917222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.917372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.917385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.917457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.917469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.917566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.917586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.917689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.917706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.917881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.917898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.917994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.918009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.918174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.918191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.918301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.918318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.918403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.918418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 
00:27:30.063 [2024-11-27 08:10:23.918570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.918587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.918758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.918776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.918871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.918888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.919034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.919051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.919216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.919233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.919387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.919403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.919548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.919564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.919709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.919726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.919893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.919909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.919984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.919996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 
00:27:30.063 [2024-11-27 08:10:23.920206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.920219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.920299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.920310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.920406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.920418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.920561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.920574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.063 [2024-11-27 08:10:23.920655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.063 [2024-11-27 08:10:23.920667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.063 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.920815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.920828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.920915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.920927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.921013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.921026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.921170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.921182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.921458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.921471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 
00:27:30.064 [2024-11-27 08:10:23.921560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.921578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.921747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.921763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.921917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.921934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.922037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.922054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.922263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.922280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.922421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.922437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.922601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.922618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.922767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.922783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.922965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.922982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.923120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.923136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 
00:27:30.064 [2024-11-27 08:10:23.923320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.923336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.923585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.923602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.923828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.923844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.924010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.924027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.924195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.924212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.924451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.924468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.924562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.924577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.924729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.924745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.924841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.924857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.925009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.925023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 
00:27:30.064 [2024-11-27 08:10:23.925093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.925108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.925252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.925264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.925400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.925413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.925499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.064 [2024-11-27 08:10:23.925509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.064 qpair failed and we were unable to recover it. 00:27:30.064 [2024-11-27 08:10:23.925643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.925656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.925809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.925822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.925976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.925990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.926201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.926219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.926440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.926456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.926553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.926570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 
00:27:30.065 [2024-11-27 08:10:23.926744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.926761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.926908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.926923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.927063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.927079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.927259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.927275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.927370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.927386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.927621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.927637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.927733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.927750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.927957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.927976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.928135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.928151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.928261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.928277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 
00:27:30.065 [2024-11-27 08:10:23.928424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.928441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.928605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.928622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.928800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.928816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.928990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.929007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.929152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.929168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.929332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.929349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.929494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.929509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.929676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.929692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.929840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.929856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 00:27:30.065 [2024-11-27 08:10:23.930090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.065 [2024-11-27 08:10:23.930107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.065 qpair failed and we were unable to recover it. 
00:27:30.065 [2024-11-27 08:10:23.930362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.065 [2024-11-27 08:10:23.930378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420
00:27:30.065 qpair failed and we were unable to recover it.
00:27:30.065-00:27:30.073 [2024-11-27 08:10:23.930542 - 08:10:23.971587] The same three-line failure sequence repeats for every subsequent reconnect attempt in this window: posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." The failing tqpair is 0xa19be0 for most of the attempts, then 0x7f39c8000b90, a single attempt on 0x7f39c4000b90, and finally 0x7f39d0000b90; every attempt targets 10.0.0.2:4420 and none of them recovers.
00:27:30.073 [2024-11-27 08:10:23.971690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.971706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.971868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.971884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.972096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.972115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.972300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.972316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.972503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.972519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.972792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.972808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.972909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.972925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.973080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.973097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.973190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.973205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.973363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.973379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 
00:27:30.073 [2024-11-27 08:10:23.973570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.973588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.973827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.973844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.974017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.974035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.974283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.974306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.974403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.974418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.974589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.974605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.974814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.073 [2024-11-27 08:10:23.974831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.073 qpair failed and we were unable to recover it. 00:27:30.073 [2024-11-27 08:10:23.974990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.975006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.975162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.975179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.975333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.975349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 
00:27:30.074 [2024-11-27 08:10:23.975613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.975630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.975888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.975904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.976012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.976029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.976210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.976226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.976412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.976434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.976670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.976687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.976945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.976968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.977085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.977102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.977262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.977279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.977459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.977476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 
00:27:30.074 [2024-11-27 08:10:23.977641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.977658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.977896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.977913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.978148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.978166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.978331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.978347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.978501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.978517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.978703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.978720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.978929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.978952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.979107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.979129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.979359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.979375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.979528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.979545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 
00:27:30.074 [2024-11-27 08:10:23.979726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.979743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.979993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.980011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.980245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.980262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.980410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.980427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.980672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.980688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.980789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.980805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.981000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.074 [2024-11-27 08:10:23.981018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.074 qpair failed and we were unable to recover it. 00:27:30.074 [2024-11-27 08:10:23.981122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.981137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.981344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.981361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.981545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.981563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 
00:27:30.075 [2024-11-27 08:10:23.981798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.981814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.981994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.982011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.982202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.982219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.982379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.982395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.982549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.982566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.982833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.982849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.983007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.983024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.983130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.983146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.983292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.983310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.983499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.983515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 
00:27:30.075 [2024-11-27 08:10:23.983745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.983762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.983932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.983953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.984197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.984214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.984452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.984469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.984672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.984691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.984941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.984963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.985115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.985132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.985324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.985341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.985553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.985570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.985764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.985781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 
00:27:30.075 [2024-11-27 08:10:23.985992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.986010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.986253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.986271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.986508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.986524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.986616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.986632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.986841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.986858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.987023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.987040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.987212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.987228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.987438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.075 [2024-11-27 08:10:23.987455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.075 qpair failed and we were unable to recover it. 00:27:30.075 [2024-11-27 08:10:23.987616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.987632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.987731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.987748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 
00:27:30.076 [2024-11-27 08:10:23.987911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.987928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.988165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.988183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.988435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.988451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.988543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.988558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.988794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.988810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.989036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.989053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.989205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.989222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.989429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.989446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.989542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.989558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.989742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.989758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 
00:27:30.076 [2024-11-27 08:10:23.989972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.989990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.990168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.990189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.990420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.990439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.990616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.990636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.990787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.990803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.990967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.990984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.991151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.991167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.991316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.991334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.991566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.991582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.991841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.991858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 
00:27:30.076 [2024-11-27 08:10:23.992091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.992108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.992265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.992282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.992440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.992456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.992657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.992675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.992855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.992871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.993103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.993120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.993354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.993371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.993475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.993492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.076 [2024-11-27 08:10:23.993716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.076 [2024-11-27 08:10:23.993732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.076 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.993885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.993902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 
00:27:30.077 [2024-11-27 08:10:23.994057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.994075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.994285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.994302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.994463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.994479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.994715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.994732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.994940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.994961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.995120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.995137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.995246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.995261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.995445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.995462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.995615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.995632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.995844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.995861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 
00:27:30.077 [2024-11-27 08:10:23.996091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.996107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.996341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.996358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.996591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.996607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.996713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.996730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.996885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.996903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.997085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.997102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.997333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.997349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.997515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.997532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.997687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.997704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.997870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.997886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 
00:27:30.077 [2024-11-27 08:10:23.998099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.998115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.998356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.998374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.998569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.998591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.998827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.998844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.999088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.999106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.999268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.999286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.999452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.999468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.999640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.077 [2024-11-27 08:10:23.999656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.077 qpair failed and we were unable to recover it. 00:27:30.077 [2024-11-27 08:10:23.999835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:23.999853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.000077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.000096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 
00:27:30.078 [2024-11-27 08:10:24.000198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.000215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.000464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.000480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.000737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.000755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.000973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.000990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.001222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.001237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.001457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.001474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.001660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.001677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.001850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.001867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.002092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.002110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.002290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.002307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 
00:27:30.078 [2024-11-27 08:10:24.002539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.002556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.002767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.002783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.002967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.002983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.003199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.003216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.003308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.003323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.003437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.003451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.003631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.003648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.003836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.003853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.004084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.004101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.004311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.004327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 
00:27:30.078 [2024-11-27 08:10:24.004499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.004515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.004602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.004617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.004767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.004783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.004880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.004895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.005105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.005121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.005291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.005307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.078 [2024-11-27 08:10:24.005476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.078 [2024-11-27 08:10:24.005494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.078 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.005675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.005691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.005842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.005859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.006063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.006081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 
00:27:30.079 [2024-11-27 08:10:24.006237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.006254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.006474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.006491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.006649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.006668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.006829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.006845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.007054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.007072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.007328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.007345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.007505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.007521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.007755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.007771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.007922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.007940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.008107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.008124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 
00:27:30.079 [2024-11-27 08:10:24.008279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.008295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.008379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.008394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.008541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.008557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.008769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.008786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.008952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.008969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.009200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.009218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.009323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.009338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.009521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.009538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.009797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.009814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 00:27:30.079 [2024-11-27 08:10:24.010071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.079 [2024-11-27 08:10:24.010089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.079 qpair failed and we were unable to recover it. 
00:27:30.080 [2024-11-27 08:10:24.010241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.010256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.010449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.010466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.010560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.010574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.010735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.010751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.010917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.010934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.011097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.011114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.011268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.011286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.011543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.011559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.011774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.011791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.011956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.011974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 
00:27:30.080 [2024-11-27 08:10:24.012204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.012221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.012456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.012472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.012709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.012726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.012968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.012984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.013151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.013168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.013337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.013352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.013594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.013610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.013762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.013778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.013884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.013900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.014075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.014092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 
00:27:30.080 [2024-11-27 08:10:24.014353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.014369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.014572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.014589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.014753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.014772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.014917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.014933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.015038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.015054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.015276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.015293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.015461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.015477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.015690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.015706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.015802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.015818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.016078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.016095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 
00:27:30.080 [2024-11-27 08:10:24.016263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.080 [2024-11-27 08:10:24.016280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.080 qpair failed and we were unable to recover it. 00:27:30.080 [2024-11-27 08:10:24.016509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.016526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.016762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.016778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.017037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.017055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.017232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.017248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.017513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.017529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.017753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.017769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.017934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.017956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.018059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.018076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.018308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.018325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 
00:27:30.081 [2024-11-27 08:10:24.018470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.018487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.018648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.018665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.018823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.018839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.019006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.019022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.019175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.019192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.019442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.019460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.019606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.019623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.019878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.019894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.020060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.020076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.020320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.020336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 
00:27:30.081 [2024-11-27 08:10:24.020442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.020457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.020620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.020638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.020730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.020745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.020890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.020907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.021119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.021137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.021356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.021373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.021605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.021621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.021859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.021875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.021962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.021977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.022173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.022190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 
00:27:30.081 [2024-11-27 08:10:24.022332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.022347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.022443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.022460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.022638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.022657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.022821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.022836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.023067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.081 [2024-11-27 08:10:24.023083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.081 qpair failed and we were unable to recover it. 00:27:30.081 [2024-11-27 08:10:24.023261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.023277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.023463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.023479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.023646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.023663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.023884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.023901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.024013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.024028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 
00:27:30.082 [2024-11-27 08:10:24.024179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.024196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.024304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.024321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.024414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.024429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.024637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.024655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.024865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.024882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.025039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.025056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.025307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.025324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.025536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.025553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.025789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.025806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.026072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.026089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 
00:27:30.082 [2024-11-27 08:10:24.026298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.026315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.026419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.026434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.026615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.026631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.026844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.026862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.027037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.027054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.027321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.027338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.027511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.027528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.027680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.027697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.027867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.027884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.027979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.027996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 
00:27:30.082 [2024-11-27 08:10:24.028077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.028092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.028332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.028349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.028582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.028599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.028812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.028829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.029022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.029040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.029276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.082 [2024-11-27 08:10:24.029292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.082 qpair failed and we were unable to recover it. 00:27:30.082 [2024-11-27 08:10:24.029467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.029484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.029642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.029658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.029804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.029820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.030031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.030048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 
00:27:30.083 [2024-11-27 08:10:24.030200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.030216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.030375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.030392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.030549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.030569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.030733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.030749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.030910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.030927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.031115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.031133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.031363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.031381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.031487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.031504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.031669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.031685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.031841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.031858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 
00:27:30.083 [2024-11-27 08:10:24.031969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.031985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.032200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.032218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.032318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.032333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.032490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.032507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.032663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.032680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.032774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.032788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.033004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.033022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.033176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.033193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.033283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.033297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 00:27:30.083 [2024-11-27 08:10:24.033534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.083 [2024-11-27 08:10:24.033552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.083 qpair failed and we were unable to recover it. 
00:27:30.087 [2024-11-27 08:10:24.060367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.060388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it.
00:27:30.087 [2024-11-27 08:10:24.060646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.060666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it. 00:27:30.087 [2024-11-27 08:10:24.060765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.060777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it. 00:27:30.087 [2024-11-27 08:10:24.060858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.060868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it. 00:27:30.087 [2024-11-27 08:10:24.061102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.061115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it. 00:27:30.087 [2024-11-27 08:10:24.061266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.061280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it. 00:27:30.087 [2024-11-27 08:10:24.061426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.061439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it. 00:27:30.087 [2024-11-27 08:10:24.061592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.061606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it. 00:27:30.087 [2024-11-27 08:10:24.061717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.061730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it. 00:27:30.087 [2024-11-27 08:10:24.061880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.061894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it. 00:27:30.087 [2024-11-27 08:10:24.062127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.087 [2024-11-27 08:10:24.062141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.087 qpair failed and we were unable to recover it. 
00:27:30.088 [2024-11-27 08:10:24.067763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.088 [2024-11-27 08:10:24.067781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.088 qpair failed and we were unable to recover it.
00:27:30.089 [2024-11-27 08:10:24.073942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.073966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it.
00:27:30.089 [2024-11-27 08:10:24.074206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.074222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.074367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.074384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.074590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.074606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.074856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.074872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.075091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.075108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.075260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.075276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.075506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.075523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.075733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.075749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.075995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.076011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.076274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.076297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 
00:27:30.089 [2024-11-27 08:10:24.076447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.076465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.076700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.076717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.089 qpair failed and we were unable to recover it. 00:27:30.089 [2024-11-27 08:10:24.076874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.089 [2024-11-27 08:10:24.076891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.077069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.077086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.077246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.077263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.077473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.077490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.077744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.077761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.077928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.077945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.078188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.078204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.078299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.078314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 
00:27:30.090 [2024-11-27 08:10:24.078499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.078516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.078706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.078723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.078958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.078976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.079157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.079174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.079321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.079338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.079493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.079510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.079602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.079619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.079863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.079879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.080120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.080138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.080390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.080407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 
00:27:30.090 [2024-11-27 08:10:24.080685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.080702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.080917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.080934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.081086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.081103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.081291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.081309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.081398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.081414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.081519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.081534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.081769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.081789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.081904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.081920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.082148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.082166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.082312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.082329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 
00:27:30.090 [2024-11-27 08:10:24.082538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.082554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.082738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.082754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.082854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.082869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.083103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.083121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.083237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.083253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.083480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.083497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.083680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.090 [2024-11-27 08:10:24.083697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.090 qpair failed and we were unable to recover it. 00:27:30.090 [2024-11-27 08:10:24.083925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.083941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.084116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.084133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.084283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.084300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 
00:27:30.091 [2024-11-27 08:10:24.084562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.084579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.084727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.084745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.084924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.084941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.085182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.085198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.085437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.085455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.085699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.085715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.085896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.085913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.086005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.086021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.086255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.086272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.086448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.086467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 
00:27:30.091 [2024-11-27 08:10:24.086687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.086701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.086882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.086896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.087049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.087061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.087128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.087143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.087371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.087385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.087617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.087630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.087857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.087871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.088121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.088134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.088333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.088346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.088549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.088562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 
00:27:30.091 [2024-11-27 08:10:24.088736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.088749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.088894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.088907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.089112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.089126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.089291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.089304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.089524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.089537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.089741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.089755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.089937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.089958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.090105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.090117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.090264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.090278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.090516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.090530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 
00:27:30.091 [2024-11-27 08:10:24.090684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.090697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.090793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.090804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.091010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.091024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.091208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.091223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.091438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.091451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.091 [2024-11-27 08:10:24.091601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.091 [2024-11-27 08:10:24.091613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.091 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.091792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.091805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.091981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.091998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.092133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.092146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.092283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.092297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 
00:27:30.092 [2024-11-27 08:10:24.092510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.092529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.092822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.092839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.093003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.093019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.093251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.093268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.093434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.093451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.093687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.093703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.093798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.093814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.094060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.094077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.094256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.094271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.094412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.094427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 
00:27:30.092 [2024-11-27 08:10:24.094554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.094569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.094748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.094765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.094869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.094886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.095030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.095049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.095211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.095227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.095400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.095416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.095622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.095638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.095713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.095729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.095909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.095925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.096167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.096185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 
00:27:30.092 [2024-11-27 08:10:24.096415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.096432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.096582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.096599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.096868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.096884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.097066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.097084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.097317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.097334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.097570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.097586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.097746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.097763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.097953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.097971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.098196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.098213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.098365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.098382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 
00:27:30.092 [2024-11-27 08:10:24.098602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.098618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.098770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.098785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.098994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.099010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.092 qpair failed and we were unable to recover it. 00:27:30.092 [2024-11-27 08:10:24.099175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.092 [2024-11-27 08:10:24.099193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.099401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.099418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.099636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.099653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.099886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.099903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.100064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.100080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.100289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.100306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.100543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.100560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 
00:27:30.093 [2024-11-27 08:10:24.100671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.100694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.100858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.100877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.100964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.100980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.101190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.101205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.101364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.101377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.101514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.101531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.101737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.101750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.101884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.101898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.102111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.102125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.102361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.102375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 
00:27:30.093 [2024-11-27 08:10:24.102526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.102540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.102686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.102698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.102850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.102863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.103027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.103043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.103202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.103216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.103437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.103450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.103679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.103691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.103773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.103784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.103931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.103944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.104091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.104104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 
00:27:30.093 [2024-11-27 08:10:24.104309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.104323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.104460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.104474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.104722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.104735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.104961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.104975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.105134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.105147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.105354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.105367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.105590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.105605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.105697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.105709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.105934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.105957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.106170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.106184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 
00:27:30.093 [2024-11-27 08:10:24.106337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.093 [2024-11-27 08:10:24.106350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.093 qpair failed and we were unable to recover it. 00:27:30.093 [2024-11-27 08:10:24.106505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.106517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.106615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.106628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.106734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.106747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.106893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.106907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.107044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.107058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.107222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.107235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.107391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.107404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.107498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.107509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.107652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.107665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 
00:27:30.094 [2024-11-27 08:10:24.107747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.107761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.107831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.107843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.107940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.107960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.108189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.108201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.108295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.108309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.108459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.108472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.108637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.108650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.108790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.108804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.108977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.108991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.109093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.109105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 
00:27:30.094 [2024-11-27 08:10:24.109269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.109282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.109430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.109445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.109626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.109640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.109827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.109841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.110080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.110094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.110296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.110310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.110547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.110561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.110701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.110713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.110798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.110810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.110955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.110968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 
00:27:30.094 [2024-11-27 08:10:24.111153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.111168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.111341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.111355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.111495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.111508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.094 qpair failed and we were unable to recover it. 00:27:30.094 [2024-11-27 08:10:24.111720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.094 [2024-11-27 08:10:24.111733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.111939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.111960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.112190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.112203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.112364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.112378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.112557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.112571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.112753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.112765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.112913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.112926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 
00:27:30.095 [2024-11-27 08:10:24.113156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.113170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.113420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.113432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.113523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.113536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.113636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.113650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.113818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.113832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.113995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.114009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.114105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.114117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.114342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.114356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.114500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.114514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.114661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.114675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 
00:27:30.095 [2024-11-27 08:10:24.114828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.114844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.114986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.115001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.115153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.115166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.115299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.115313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.115483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.115497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.115644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.115657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.115749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.115768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.115923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.095 [2024-11-27 08:10:24.115939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.095 qpair failed and we were unable to recover it. 00:27:30.095 [2024-11-27 08:10:24.116153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.116169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.116361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.116377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 
00:27:30.096 [2024-11-27 08:10:24.116474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.116489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.116728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.116745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.116908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.116924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.117082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.117099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.117357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.117374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.117612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.117628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.117787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.117803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.118032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.118049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.118300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.118317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.118462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.118479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 
00:27:30.096 [2024-11-27 08:10:24.118573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.118588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.118746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.118763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.118856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.118873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.119037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.119055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.119285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.119302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.119461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.119477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.119653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.119671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.119763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.119781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.119891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.119907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.120070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.120086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 
00:27:30.096 [2024-11-27 08:10:24.120243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.120261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.120502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.120519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.120703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.120719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.120905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.120921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.121079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.121097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.121191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.121207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.121434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.121451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.121595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.121612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.121763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.121780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.121922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.121939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 
00:27:30.096 [2024-11-27 08:10:24.122032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.122049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.122300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.122317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.122484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.122501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.122736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.122752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.123057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.123075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.123255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.123271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.096 qpair failed and we were unable to recover it. 00:27:30.096 [2024-11-27 08:10:24.123425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.096 [2024-11-27 08:10:24.123443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.123543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.123558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.123769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.123786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.123945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.123967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 
00:27:30.097 [2024-11-27 08:10:24.124136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.124153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.124334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.124350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.124505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.124523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.124696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.124713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.124884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.124900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.125134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.125152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.125365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.125382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.125597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.125616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.125795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.125813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 00:27:30.097 [2024-11-27 08:10:24.126043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.097 [2024-11-27 08:10:24.126060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.097 qpair failed and we were unable to recover it. 
00:27:30.381 [2024-11-27 08:10:24.126254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.126270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.126418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.126435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.126539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.126556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.126788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.126805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.126889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.126905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.127049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.127065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.127303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.127320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.127562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.127580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.127752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.127775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.127891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.127907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 
00:27:30.381 [2024-11-27 08:10:24.128013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.128026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.128229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.128243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.128488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.128502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.381 [2024-11-27 08:10:24.128638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.381 [2024-11-27 08:10:24.128650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.381 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.128862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.128875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.129092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.129106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.129268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.129282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.129413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.129425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.129570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.129583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.129830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.129842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 
00:27:30.382 [2024-11-27 08:10:24.130055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.130070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.130229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.130248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.130352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.130367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.130597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.130610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.130691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.130702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.130874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.130889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.131069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.131083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.131240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.131254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.131410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.131423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 00:27:30.382 [2024-11-27 08:10:24.131565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.382 [2024-11-27 08:10:24.131579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.382 qpair failed and we were unable to recover it. 
00:27:30.382 [2024-11-27 08:10:24.131737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.382 [2024-11-27 08:10:24.131751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.382 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for tqpair=0x7f39c8000b90 through 08:10:24.135023 ...]
00:27:30.382 [2024-11-27 08:10:24.135260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.382 [2024-11-27 08:10:24.135280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420
00:27:30.383 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0x7f39d0000b90 through 08:10:24.142622 ...]
00:27:30.384 [2024-11-27 08:10:24.142701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.384 [2024-11-27 08:10:24.142716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.384 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0x7f39c8000b90 through the final occurrence below ...]
00:27:30.388 [2024-11-27 08:10:24.169735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.388 [2024-11-27 08:10:24.169747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.388 qpair failed and we were unable to recover it.
00:27:30.388 [2024-11-27 08:10:24.169899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.169911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.170112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.170126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.170273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.170285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.170427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.170438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.170590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.170603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.170757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.170770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.170853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.170864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.171019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.171033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.171237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.171254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.171403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.171416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 
00:27:30.388 [2024-11-27 08:10:24.171622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.171634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.171807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.171820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.172027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.172040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.172288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.172307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.172551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.172563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.172767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.172780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.172928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.172941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.388 qpair failed and we were unable to recover it. 00:27:30.388 [2024-11-27 08:10:24.173120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.388 [2024-11-27 08:10:24.173134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.173315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.173328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.173557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.173570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 
00:27:30.389 [2024-11-27 08:10:24.173717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.173730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.173953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.173966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.174145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.174157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.174306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.174319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.174447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.174460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.174643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.174655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.174792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.174804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.174893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.174905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.175071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.175084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.175284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.175297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 
00:27:30.389 [2024-11-27 08:10:24.175450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.175463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.175721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.175732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.175895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.175908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.176153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.176166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.176363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.176375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.176585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.176598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.176778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.176790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.177021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.177034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.177183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.177197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.177286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.177298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 
00:27:30.389 [2024-11-27 08:10:24.177526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.177538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.177739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.177751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.177960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.177973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.178141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.178153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.178360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.178372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.178616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.178628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.178851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.178864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.179042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.179055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.179197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.179212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.179308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.179320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 
00:27:30.389 [2024-11-27 08:10:24.179504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.179515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.179615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.179627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.179799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.389 [2024-11-27 08:10:24.179811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.389 qpair failed and we were unable to recover it. 00:27:30.389 [2024-11-27 08:10:24.180032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.180045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.180181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.180194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.180292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.180304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.180562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.180575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.180723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.180735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.180960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.180974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.181139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.181151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 
00:27:30.390 [2024-11-27 08:10:24.181223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.181234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.181391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.181403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.181557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.181569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.181747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.181760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.181904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.181916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.182076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.182088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.182251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.182263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.182411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.182423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.182576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.182588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.182679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.182690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 
00:27:30.390 [2024-11-27 08:10:24.182823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.182836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.183058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.183072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.183230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.183243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.183411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.183423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.183497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.183508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.183610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.183622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.183784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.183797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.183966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.183979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.184156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.184169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.184311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.184323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 
00:27:30.390 [2024-11-27 08:10:24.184455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.184468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.184563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.184574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.184719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.184732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.184868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.184880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.185048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.185061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.185275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.185287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.185364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.185376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.185471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.185482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.185748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.185763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 00:27:30.390 [2024-11-27 08:10:24.185851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.390 [2024-11-27 08:10:24.185862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.390 qpair failed and we were unable to recover it. 
00:27:30.390 [2024-11-27 08:10:24.186047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.186060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.186284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.186296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.186497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.186509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.186706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.186719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.186939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.186958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.187056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.187068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.187246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.187259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.187425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.187437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.187571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.187583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.187748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.187760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 
00:27:30.391 [2024-11-27 08:10:24.187971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.187984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.188204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.188216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.188291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.188303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.188396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.188407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.188563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.188576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.188719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.188732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.188933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.188945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.189177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.189189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.189300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.189313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.189407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.189419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 
00:27:30.391 [2024-11-27 08:10:24.189610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.189623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.189703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.189714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.189915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.189928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.190063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.190075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.190153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.190164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.190345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.190375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.190544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.190561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.190702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.190718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.190809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.190824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 00:27:30.391 [2024-11-27 08:10:24.190999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.391 [2024-11-27 08:10:24.191017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.391 qpair failed and we were unable to recover it. 
00:27:30.391 [2024-11-27 08:10:24.191270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.191286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.191395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.191411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.191572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.191589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.191674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.191689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.191847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.191864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.191976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.191993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.192085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.192100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.192308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.192326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.192433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.192455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.192684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.192701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 
00:27:30.392 [2024-11-27 08:10:24.192936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.192958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.193122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.193138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.193292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.193308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.193481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.193497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.193761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.193778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.193956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.193973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.194142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.194158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.194321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.194337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.194440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.194456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.194700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.194717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 
00:27:30.392 [2024-11-27 08:10:24.194818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.194834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.195000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.195017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.195181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.195198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.195357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.195372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.195531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.195547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.195712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.195728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.195875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.195892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.196113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.196130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.196374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.196390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.196658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.196675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 
00:27:30.392 [2024-11-27 08:10:24.196816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.196832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.196995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.197013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.197177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.197194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.197354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.197369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.197462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.197478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.197666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.392 [2024-11-27 08:10:24.197694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.392 qpair failed and we were unable to recover it. 00:27:30.392 [2024-11-27 08:10:24.197817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.197834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.197917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.197933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.198129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.198146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.198237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.198254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 
00:27:30.393 [2024-11-27 08:10:24.198478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.198495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.198667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.198684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.198911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.198927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.199111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.199128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.199238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.199255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.199398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.199414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.199640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.199656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.199893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.199909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.200003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.200019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.200126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.200142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 
00:27:30.393 [2024-11-27 08:10:24.200299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.200315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.200525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.200541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.200694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.200710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.200855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.200871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.201026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.201043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.201143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.201159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.201373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.201389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.201503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.201519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.201743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.201757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.201919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.201932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 
00:27:30.393 [2024-11-27 08:10:24.202128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.202141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.202379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.202394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.202612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.202628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.202837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.202851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.202963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.202976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.203165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.203178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.203386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.203400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.203629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.203641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.203792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.203803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.204020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.204034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 
00:27:30.393 [2024-11-27 08:10:24.204134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.204147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.393 [2024-11-27 08:10:24.204351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.393 [2024-11-27 08:10:24.204363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.393 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.204514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.204527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.204760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.204773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.205001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.205015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.205165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.205178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.205268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.205279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.205383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.205395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.205629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.205642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.205811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.205827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 
00:27:30.394 [2024-11-27 08:10:24.206049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.206062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.206230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.206242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.206337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.206350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.206508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.206521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.206723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.206736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.206963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.206977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.207137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.207151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.207239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.207251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.207389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.207406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.207669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.207694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 
00:27:30.394 [2024-11-27 08:10:24.207870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.207888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.208118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.208134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.208222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.208233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.208391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.208404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.208615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.208628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.208765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.208777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.208983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.208996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.209153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.209166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.209301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.209314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.209385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.209397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 
00:27:30.394 [2024-11-27 08:10:24.209519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.209531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.209620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.209633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.209717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.209732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.210005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.210018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.210169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.210181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.210255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.394 [2024-11-27 08:10:24.210266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.394 qpair failed and we were unable to recover it. 00:27:30.394 [2024-11-27 08:10:24.210419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.210431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.210531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.210544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.210676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.210688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.210862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.210875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 
00:27:30.395 [2024-11-27 08:10:24.211070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.211083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.211227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.211239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.211399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.211411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.211596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.211608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.211764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.211776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.211893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.211906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.212063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.212077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.212189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.212202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.212357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.212370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.212507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.212519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 
00:27:30.395 [2024-11-27 08:10:24.212673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.212686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.212767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.212778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.212856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.212868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.212964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.212976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.213070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.213081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.213250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.213264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.213371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.213384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.213519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.213531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.213728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.213740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.213959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.213972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 
00:27:30.395 [2024-11-27 08:10:24.214185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.214197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.214375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.214387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.214552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.214565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.214705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.214717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.214802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.214813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.214899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.214911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.214995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.215008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.215100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.215111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.215330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.215343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.215545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.215557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 
00:27:30.395 [2024-11-27 08:10:24.215760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.215773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.215999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.395 [2024-11-27 08:10:24.216013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.395 qpair failed and we were unable to recover it. 00:27:30.395 [2024-11-27 08:10:24.216166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.216184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.216268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.216279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.216434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.216446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.216619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.216632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.216872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.216884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.217022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.217036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.217186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.217199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.217378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.217391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 
00:27:30.396 [2024-11-27 08:10:24.217571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.217583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.217807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.217819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.217969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.217982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.218192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.218205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.218307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.218320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.218469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.218482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.218709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.218723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.218793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.218804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.218946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.218971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.219133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.219146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 
00:27:30.396 [2024-11-27 08:10:24.219279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.219292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.219450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.219462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.219698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.219711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.219940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.219959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.220163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.220176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.220406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.220419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.220518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.220530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.220729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.220741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.220890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.220902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.221008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.221021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 
00:27:30.396 [2024-11-27 08:10:24.221196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.221208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.396 [2024-11-27 08:10:24.221362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.396 [2024-11-27 08:10:24.221374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.396 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.221601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.221614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.221796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.221808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.221974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.221987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.222176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.222189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.222395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.222407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.222573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.222586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.222807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.222820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.222991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.223004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 
00:27:30.397 [2024-11-27 08:10:24.223204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.223216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.223314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.223325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.223469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.223484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.223703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.223716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.223792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.223805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.223958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.223971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.224130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.224143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.224240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.224252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.224339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.224350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.224422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.224432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 
00:27:30.397 [2024-11-27 08:10:24.224612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.224624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.224850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.224862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.225123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.225136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.225230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.225242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.225403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.225415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.225519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.225531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.225677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.225689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.225889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.225902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.226080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.226093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.226235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.226248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 
00:27:30.397 [2024-11-27 08:10:24.226473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.226485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.226582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.226595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.226669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.226681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.226903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.226917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.227131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.227145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.227297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.227310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.227535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.397 [2024-11-27 08:10:24.227548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.397 qpair failed and we were unable to recover it. 00:27:30.397 [2024-11-27 08:10:24.227698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.227711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.227927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.227939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.228197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.228210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 
00:27:30.398 [2024-11-27 08:10:24.228350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.228362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.228592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.228604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.228760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.228772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.228983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.228996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.229222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.229235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.229321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.229333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.229408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.229420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.229586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.229600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.229750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.229763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.229912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.229924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 
00:27:30.398 [2024-11-27 08:10:24.230148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.230162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.230310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.230323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.230546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.230561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.230719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.230731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.230809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.230820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.230958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.230972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.231138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.231150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.231286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.231299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.231468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.231480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.231628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.231640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 
00:27:30.398 [2024-11-27 08:10:24.231838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.231852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.232016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.232029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.232172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.232185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.232327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.232341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.232425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.232436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.232516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.232529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.232753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.232767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.232924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.232937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.233146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.233160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.398 [2024-11-27 08:10:24.233298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.233311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 
00:27:30.398 [2024-11-27 08:10:24.233448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.398 [2024-11-27 08:10:24.233461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.398 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.233699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.233711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.233847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.233861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.234066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.234080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.234167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.234181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.234328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.234342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.234565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.234578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.234748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.234760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.234845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.234857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.234969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.234982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 
00:27:30.399 [2024-11-27 08:10:24.235192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.235206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.235433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.235445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.235589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.235602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.235693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.235706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.235854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.235868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.235953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.235965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.236077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.236090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.236248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.236261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.236398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.236412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.236550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.236563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 
00:27:30.399 [2024-11-27 08:10:24.236637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.236648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.236730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.236741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.236898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.236913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.236993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.237006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.237097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.237110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.237293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.237306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.237397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.237410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.237549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.237578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.237835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.237848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.238076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.238089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 
00:27:30.399 [2024-11-27 08:10:24.238296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.238309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.238513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.238526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.238753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.238767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.238976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.238990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.239136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.239149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.239302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.239315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.399 [2024-11-27 08:10:24.239517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.399 [2024-11-27 08:10:24.239530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.399 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.239608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.239619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.239765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.239778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.240010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.240023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 
00:27:30.400 [2024-11-27 08:10:24.240175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.240187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.240393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.240406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.240630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.240643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.240871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.240884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.241034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.241048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.241197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.241209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.241289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.241300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.241501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.241514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.241745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.241758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.241916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.241928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 
00:27:30.400 [2024-11-27 08:10:24.242134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.242147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.242346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.242358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.242504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.242517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.242658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.242671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.242804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.242817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.242958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.242973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.243097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.243111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.243258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.243271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.243377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.243390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.243526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.243538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 
00:27:30.400 [2024-11-27 08:10:24.243709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.243722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.243864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.243877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.244097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.244112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.244277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.244289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.244426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.244438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.244621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.244634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.244779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.244802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.245050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.245063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.245286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.245298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.245515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.245528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 
00:27:30.400 [2024-11-27 08:10:24.245681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.245693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.245782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.245794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.245930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.245943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.246134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.246146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.400 [2024-11-27 08:10:24.246295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.400 [2024-11-27 08:10:24.246307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.400 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.246517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.246531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.246702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.246714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.246801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.246812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.246968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.246980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.247235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.247248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 
00:27:30.401 [2024-11-27 08:10:24.247396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.247409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.247660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.247673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.247810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.247822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.247892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.247904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.248044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.248058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.248126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.248138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.248269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.248282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.248482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.248494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.248641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.248653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.248891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.248918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 
00:27:30.401 [2024-11-27 08:10:24.249091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.249109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.249342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.249359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.249565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.249583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.249760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.249777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.250019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.250037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.250199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.250217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.250389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.250405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.250618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.250635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.250793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.250810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.250973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.250989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 
00:27:30.401 [2024-11-27 08:10:24.251134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.251152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.251327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.251344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.251491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.251511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.251740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.251757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.251903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.251920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.401 qpair failed and we were unable to recover it. 00:27:30.401 [2024-11-27 08:10:24.252092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.401 [2024-11-27 08:10:24.252110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.252258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.252275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.252426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.252442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.252542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.252558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.252765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.252782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 
00:27:30.402 [2024-11-27 08:10:24.252952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.252966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.253155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.253167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.253393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.253406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.253496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.253508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.253661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.253673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.253832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.253845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.254071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.254084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.254291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.254305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.254463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.254476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.254623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.254635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 
00:27:30.402 [2024-11-27 08:10:24.254786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.254798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.255074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.255087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.255244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.255256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.255416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.255429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.255632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.255645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.255876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.255888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.256034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.256047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.256255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.256269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.256503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.256516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.256686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.256699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 
00:27:30.402 [2024-11-27 08:10:24.256908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.256922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.257136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.257149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.257333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.257346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.257571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.257583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.257721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.257734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.257896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.257908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.257995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.258007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.258241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.258254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.258511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.258525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.258673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.258686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 
00:27:30.402 [2024-11-27 08:10:24.258908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.258921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.259148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.402 [2024-11-27 08:10:24.259161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.402 qpair failed and we were unable to recover it. 00:27:30.402 [2024-11-27 08:10:24.259329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.259344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.259500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.259513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.259668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.259681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.259773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.259786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.260035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.260049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.260213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.260226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.260383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.260395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.260486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.260497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 
00:27:30.403 [2024-11-27 08:10:24.260737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.260749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.260891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.260904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.261111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.261124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.261333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.261346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.261497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.261511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.261675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.261688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.261856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.261870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.262033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.262045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.262201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.262213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.262429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.262442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 
00:27:30.403 [2024-11-27 08:10:24.262687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.262699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.262857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.262870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.262960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.262973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.263186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.263199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.263404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.263417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.263634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.263646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.263873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.263887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.264056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.264069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.264246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.264258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.264524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.264536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 
00:27:30.403 [2024-11-27 08:10:24.264687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.264699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.264897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.264910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.265031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.265059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.265230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.265247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.265402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.265418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.265519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.265536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.265624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.265639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.265781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.403 [2024-11-27 08:10:24.265798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.403 qpair failed and we were unable to recover it. 00:27:30.403 [2024-11-27 08:10:24.265898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.265914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.266071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.266089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-11-27 08:10:24.266183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.266199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.266430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.266447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.266551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.266569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.266785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.266802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.266966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.266980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.267073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.267084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.267241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.267254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.267392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.267404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.267616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.267630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.267822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.267835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-11-27 08:10:24.267984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.267997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.268099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.268111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.268313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.268325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.268467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.268481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.268567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.268579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.268806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.268818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.268978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.268993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.269148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.269161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.269387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.269400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.269553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.269566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-11-27 08:10:24.269790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.269804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.269952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.269964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.270055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.270066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.270210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.270223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.270425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.270438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.270573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.270586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.270730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.270743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.270871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.270885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.271057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.271071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.271274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.271289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-11-27 08:10:24.271488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.271501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.271658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.271671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.271810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.271823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.272036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.272050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.272203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.272216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.272364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.272377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.272512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.272525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.272728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.404 [2024-11-27 08:10:24.272741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.404 qpair failed and we were unable to recover it. 00:27:30.404 [2024-11-27 08:10:24.272899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.272911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.273067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.273079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-11-27 08:10:24.273278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.273291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.273396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.273409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.273560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.273573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.273684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.273697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.273918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.273931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.274178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.274191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.274343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.274355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.274578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.274592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.274803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.274815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.274909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.274922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-11-27 08:10:24.275012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.275024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.275164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.275176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.275330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.275343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.275558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.275571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.275704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.275717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.275888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.275901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.276077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.276090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.276294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.276308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.276417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.276429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.276643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.276656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-11-27 08:10:24.276829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.276842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.276988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.277001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.277235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.277247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.277402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.277414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.277605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.277617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.277765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.277777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.277915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.277928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.278149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.278162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.278395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.278409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.278612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.278627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-11-27 08:10:24.278899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.278911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.279065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.279079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.405 [2024-11-27 08:10:24.279240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.405 [2024-11-27 08:10:24.279253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.405 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.279328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.279340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.279587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.279599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.279759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.279771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.279922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.279935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.280175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.280187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.280260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.280272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.280405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.280419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 
00:27:30.406 [2024-11-27 08:10:24.280570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.280583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.280807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.280820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.280956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.280970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.281128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.281140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.281363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.281375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.281626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.281639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.281839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.281851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.281997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.282011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.282157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.282170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.282341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.282354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 
00:27:30.406 [2024-11-27 08:10:24.282583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.282595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.282756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.282768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.282972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.282984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.283204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.283218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.283308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.283319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.283535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.283547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.283714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.283736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.283895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.283912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.284005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.284021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 00:27:30.406 [2024-11-27 08:10:24.284194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.406 [2024-11-27 08:10:24.284210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.406 qpair failed and we were unable to recover it. 
00:27:30.406 [2024-11-27 08:10:24.284388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.406 [2024-11-27 08:10:24.284405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420
00:27:30.406 qpair failed and we were unable to recover it.
00:27:30.406 [2024-11-27 08:10:24.284807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.406 [2024-11-27 08:10:24.284822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.406 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent connection attempt through 08:10:24.304 ...]
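On Linux, errno = 111 is ECONNREFUSED: the host 10.0.0.2 is reachable, but nothing is accepting TCP connections on port 4420 (the standard NVMe/TCP port) at this point in the test, so every connect() issued by posix_sock_create fails immediately and nvme_tcp_qpair_connect_sock cannot bring the queue pair up. The sketch below is not SPDK code; it is a minimal, self-contained illustration of the same condition, connecting to a port with no listener and printing the resulting errno. The address and port are copied from the log; the rest of the program is illustrative.

/* Minimal sketch (not SPDK code): reproduce the errno = 111 (ECONNREFUSED)
 * condition that the log's connect() calls are hitting. Point it at any
 * host:port with no listener to see the same errno value. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port this prints: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}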
[... the same failure sequence continues from 08:10:24.304 through 08:10:24.327: every connect() attempt returns errno = 111, and the sock connection error alternates between tqpair=0x7f39c8000b90 and tqpair=0x7f39d0000b90, always with addr=10.0.0.2, port=4420 ...]
00:27:30.412 [2024-11-27 08:10:24.327178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.412 [2024-11-27 08:10:24.327253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420
00:27:30.412 qpair failed and we were unable to recover it.
00:27:30.412 [2024-11-27 08:10:24.327498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.327536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.327787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.327822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.327966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.328001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.328199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.328233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.328546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.328581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.328722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.328757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.328921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.328937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.329172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.329188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.329408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.329442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.329690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.329724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 
00:27:30.412 [2024-11-27 08:10:24.329990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.330027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.330276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.330311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.330513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.330548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.330819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.330854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.331116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.331152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.331446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.331486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.331586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.331603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.331833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.331867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.332050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.332085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.332202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.332233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 
00:27:30.412 [2024-11-27 08:10:24.332502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.332537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.332784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.332817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.333011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.333046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.333248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.333282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.333552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.333586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.333734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.333767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.333978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.334021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.334273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.334307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.412 [2024-11-27 08:10:24.334554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.412 [2024-11-27 08:10:24.334570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.412 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.334674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.334691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 
00:27:30.413 [2024-11-27 08:10:24.334940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.334983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.335187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.335220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.335471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.335506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.335704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.335721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.335875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.335918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.336202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.336237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.336512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.336546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.336743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.336779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.336996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.337026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.337126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.337139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 
00:27:30.413 [2024-11-27 08:10:24.337368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.337381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.337577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.337590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.337749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.337783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.337920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.337980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.338129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.338164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.338359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.338394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.338595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.338628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.338837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.338871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.339083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.339118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.339304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.339336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 
00:27:30.413 [2024-11-27 08:10:24.339583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.339619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.339825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.339860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.340075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.340109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.340388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.340420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.340666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.340699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.340900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.340943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.341036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.341048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.341277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.341312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.341523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.341556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.341674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.341707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 
00:27:30.413 [2024-11-27 08:10:24.341982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.341995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.342164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.342176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.342355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.342389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.342579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.342612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.342808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.342842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.343113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.343128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.343285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.343324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.343513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.343546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.413 qpair failed and we were unable to recover it. 00:27:30.413 [2024-11-27 08:10:24.343815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.413 [2024-11-27 08:10:24.343827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.344042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.344077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 
00:27:30.414 [2024-11-27 08:10:24.344342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.344375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.344606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.344640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.344845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.344877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.345128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.345163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.345361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.345396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.345603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.345636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.345882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.345916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.346216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.346255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.346513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.346548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.346782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.346814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 
00:27:30.414 [2024-11-27 08:10:24.347047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.347083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.347332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.347366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.347585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.347619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.347796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.347812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.347924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.347941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.348112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.348156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.348354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.348388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.348621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.348655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.348844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.348861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.349115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.349150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 
00:27:30.414 [2024-11-27 08:10:24.349362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.349396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.349675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.349709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.349909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.349943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.350231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.350266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.350475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.350509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.350699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.350733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.350980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.350996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.351238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.351273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.351402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.351435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.351751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.351785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 
00:27:30.414 [2024-11-27 08:10:24.352092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.352110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.352291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.352307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.414 [2024-11-27 08:10:24.352541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-11-27 08:10:24.352557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.414 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.352812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.352854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.353053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.353088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.353287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.353321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.353567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.353607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.353902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.353935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.354103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.354137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.354321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.354354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 
00:27:30.415 [2024-11-27 08:10:24.354569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.354604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.354852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.354886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.355083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.355118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.355393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.355427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.355626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.355660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.355923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.355967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.356165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.356197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.356440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.356474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.356719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.356756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.356916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.356932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 
00:27:30.415 [2024-11-27 08:10:24.357112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.357130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.357305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.357321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.357569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.357585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.357759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.357794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.357985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.358019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.358143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.358177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.358424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.358458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.358728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.358744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.358945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.358966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 00:27:30.415 [2024-11-27 08:10:24.359060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-11-27 08:10:24.359075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.415 qpair failed and we were unable to recover it. 
00:27:30.415 [2024-11-27 08:10:24.359313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.415 [2024-11-27 08:10:24.359348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420
00:27:30.415 qpair failed and we were unable to recover it.
00:27:30.415 - 00:27:30.419 [2024-11-27 08:10:24.359531 - 08:10:24.399568] (repeated retries, same result) posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:27:30.419 [2024-11-27 08:10:24.398221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.419 [2024-11-27 08:10:24.398294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.419 qpair failed and we were unable to recover it.
00:27:30.419 [2024-11-27 08:10:24.398630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.419 [2024-11-27 08:10:24.398700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420
00:27:30.419 qpair failed and we were unable to recover it.
00:27:30.419 - 00:27:30.420 [2024-11-27 08:10:24.398918 - 08:10:24.411356] (repeated retries, same result) posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:27:30.420 [2024-11-27 08:10:24.411561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.411593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.411859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.411892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.412193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.412228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.412490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.412523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.412708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.412724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.412912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.412945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.413234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.413274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.413544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.413577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.413890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.413923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.414136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.414170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 
00:27:30.420 [2024-11-27 08:10:24.414445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.414478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.414749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.414782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.414982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.414999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.415171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.415205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.415479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.415511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.415783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.415817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.416040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.416075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.416272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.416306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.416565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.416599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 00:27:30.420 [2024-11-27 08:10:24.416892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.420 [2024-11-27 08:10:24.416925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.420 qpair failed and we were unable to recover it. 
00:27:30.421 [2024-11-27 08:10:24.417132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.417149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.417390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.417423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.417676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.417709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.417964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.418000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.418269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.418301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.418442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.418475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.418748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.418781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.418981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.419016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.419295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.419329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.419456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.419489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 
00:27:30.421 [2024-11-27 08:10:24.419731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.419748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.419945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.419989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.420285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.420318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.420673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.420748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.421045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.421086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.421312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.421346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.421617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.421659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.421873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.421890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.422130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.422149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.422333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.422349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 
00:27:30.421 [2024-11-27 08:10:24.422439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.422454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.422650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.422683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.422884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.422916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.423125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.423161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.423425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.423474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.423726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.423758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.423946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.423992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.424263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.424280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.424515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.424532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.424693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.424710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 
00:27:30.421 [2024-11-27 08:10:24.424876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.424910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.425144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.425179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.425385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.425417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.425604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.425637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.425855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.425887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.426167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.426184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.426426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.426443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.426605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.426621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.426715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.426730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.427008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.427044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 
00:27:30.421 [2024-11-27 08:10:24.427269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.427308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.427621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.427654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.427921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.427961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.428107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.428141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.428441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.428475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.428678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.421 [2024-11-27 08:10:24.428710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.421 qpair failed and we were unable to recover it. 00:27:30.421 [2024-11-27 08:10:24.428992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.429026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.429321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.429353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.429588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.429604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.429815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.429831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 
00:27:30.422 [2024-11-27 08:10:24.429993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.430009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.430195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.430226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.430496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.430531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.430820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.430852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.431099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.431135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.431384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.431418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.431638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.431671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.431866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.431899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.432154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.432171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.432335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.432352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 
00:27:30.422 [2024-11-27 08:10:24.432514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.432548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.432745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.432777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.433047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.433064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.433153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.433169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.433372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.433404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.433684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.433718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.433903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.433936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.434202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.434219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.434434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.434451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.434626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.434642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 
00:27:30.422 [2024-11-27 08:10:24.434858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.434891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.435172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.435206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.435398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.435431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.435692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.435726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.436025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.436059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.436247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.436281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.436476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.436508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.436733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.436766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.436894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.436911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.437129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.437163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 
00:27:30.422 [2024-11-27 08:10:24.437435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.437469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.437765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.437799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.437989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.438024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.438284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.438301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.438538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.438554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.438766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.438782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.439001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.422 [2024-11-27 08:10:24.439018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.422 qpair failed and we were unable to recover it. 00:27:30.422 [2024-11-27 08:10:24.439314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.439347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.439542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.439576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.439860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.439893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 
00:27:30.423 [2024-11-27 08:10:24.440052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.440086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.440273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.440306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.440495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.440511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.440701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.440717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.440999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.441033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.441295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.441329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.441540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.441575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.441753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.441770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.441937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.441981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.442279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.442312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 
00:27:30.423 [2024-11-27 08:10:24.442447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.442480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.442752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.442785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.443047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.443065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.443311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.443327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.443545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.443561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.443735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.443753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.443922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.443939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.444183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.444218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.444427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.444466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 00:27:30.423 [2024-11-27 08:10:24.444683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.423 [2024-11-27 08:10:24.444717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.423 qpair failed and we were unable to recover it. 
00:27:30.423 [2024-11-27 08:10:24.444986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.423 [2024-11-27 08:10:24.445004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420
00:27:30.423 qpair failed and we were unable to recover it.
00:27:30.709 [2024-11-27 08:10:24.464591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.709 [2024-11-27 08:10:24.464632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:27:30.709 qpair failed and we were unable to recover it.
00:27:30.709 [2024-11-27 08:10:24.464936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.710 [2024-11-27 08:10:24.464985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420
00:27:30.710 qpair failed and we were unable to recover it.
00:27:30.710 [2024-11-27 08:10:24.465246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.710 [2024-11-27 08:10:24.465281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.710 qpair failed and we were unable to recover it.
00:27:30.714 [... the remainder of this span, from 08:10:24.444986 through 08:10:24.495239, repeats the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it."), always against addr=10.0.0.2, port=4420, cycling over tqpair 0xa19be0, 0x7f39c4000b90, 0x7f39c8000b90 and 0x7f39d0000b90 ...]
00:27:30.714 [2024-11-27 08:10:24.495503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.495537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.495695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.495730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.495916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.495934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.496163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.496205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.496428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.496462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.496607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.496641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.496846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.496881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.497110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.497147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.497339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.497374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.497574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.497608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 
00:27:30.714 [2024-11-27 08:10:24.497806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.497840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.498117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.498152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.498408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.498442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.498650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.498684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.498973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.499008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.499285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.499320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.499606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.499639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.499942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.499991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.500286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.500321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.500601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.500635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 
00:27:30.714 [2024-11-27 08:10:24.500870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.500904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.501181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.714 [2024-11-27 08:10:24.501217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.714 qpair failed and we were unable to recover it. 00:27:30.714 [2024-11-27 08:10:24.501474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.501491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.501654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.501671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.501928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.501975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.502176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.502209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.502465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.502500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.502708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.502743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.502962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.502997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.503245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.503263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 
00:27:30.715 [2024-11-27 08:10:24.503571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.503616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.503835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.503874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.504222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.504260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.504471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.504505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.504796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.504832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.505063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.505099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.505416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.505450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.505643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.505676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.505875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.505910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.506123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.506159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 
00:27:30.715 [2024-11-27 08:10:24.506341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.506359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.506651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.506687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.506962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.506998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.507252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.507271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.507431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.507449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.507723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.507756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.507968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.715 [2024-11-27 08:10:24.508004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.715 qpair failed and we were unable to recover it. 00:27:30.715 [2024-11-27 08:10:24.508211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.508245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.508522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.508556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.508811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.508846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 
00:27:30.716 [2024-11-27 08:10:24.509044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.509062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.509219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.509236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.509457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.509490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.509819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.509854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.510131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.510149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.510253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.510271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.510507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.510524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.510708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.510729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.510902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.510919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.511144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.511162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 
00:27:30.716 [2024-11-27 08:10:24.511341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.511359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.511552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.511586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.511841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.511874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.512146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.512166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.512331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.512348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.512459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.512492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.512695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.512727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.513014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.513033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.513156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.513172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.513328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.513345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 
00:27:30.716 [2024-11-27 08:10:24.513500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.513533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.513829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.513863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.514052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.514100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.514292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.514308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.514468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.514485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.514710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.514743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.514903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.514938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.716 [2024-11-27 08:10:24.515133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.716 [2024-11-27 08:10:24.515168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.716 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.515366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.515398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.515599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.515632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 
00:27:30.717 [2024-11-27 08:10:24.515912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.515962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.516259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.516292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.516549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.516583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.516880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.516912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.517179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.517196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.517388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.517422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.517706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.517741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.518022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.518058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.518244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.518279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.518476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.518511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 
00:27:30.717 [2024-11-27 08:10:24.518791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.518827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.519051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.519070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.519310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.519328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.519490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.519507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.519676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.519694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.519845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.519862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.519975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.519993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.520162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.520196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.520484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.520526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.520733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.520769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 
00:27:30.717 [2024-11-27 08:10:24.520994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.521030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.521316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.521335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.521571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.521605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.521745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.521780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.522035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.522073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.522298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.522333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.522610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.522645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.522835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.717 [2024-11-27 08:10:24.522869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.717 qpair failed and we were unable to recover it. 00:27:30.717 [2024-11-27 08:10:24.523079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.523114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.523391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.523408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 
00:27:30.718 [2024-11-27 08:10:24.523561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.523578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.523730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.523747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.523911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.523929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.524124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.524141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.524236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.524253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.524454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.524472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.524623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.524640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.524747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.524765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.525000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.525037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.525386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.525422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 
00:27:30.718 [2024-11-27 08:10:24.525635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.525669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.525935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.525983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.526267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.526314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.526490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.526509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.526667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.526701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.526820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.526866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.527071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.527088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.527207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.527223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.527480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.527514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 00:27:30.718 [2024-11-27 08:10:24.527778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.718 [2024-11-27 08:10:24.527813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:30.718 qpair failed and we were unable to recover it. 
00:27:30.718 [2024-11-27 08:10:24.527935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.718 [2024-11-27 08:10:24.527996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420
00:27:30.718 qpair failed and we were unable to recover it.
00:27:30.718 [... this same three-line pattern (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 08:10:24.527 through 08:10:24.579, alternating between tqpair=0xa19be0 and tqpair=0x7f39c4000b90, always for addr=10.0.0.2, port=4420 ...]
00:27:30.725 [2024-11-27 08:10:24.579589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.725 [2024-11-27 08:10:24.579607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.725 qpair failed and we were unable to recover it. 00:27:30.725 [2024-11-27 08:10:24.579867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.725 [2024-11-27 08:10:24.579903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.725 qpair failed and we were unable to recover it. 00:27:30.725 [2024-11-27 08:10:24.580103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.725 [2024-11-27 08:10:24.580121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.725 qpair failed and we were unable to recover it. 00:27:30.725 [2024-11-27 08:10:24.580219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.725 [2024-11-27 08:10:24.580236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.725 qpair failed and we were unable to recover it. 00:27:30.725 [2024-11-27 08:10:24.580470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.725 [2024-11-27 08:10:24.580488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.725 qpair failed and we were unable to recover it. 00:27:30.725 [2024-11-27 08:10:24.580572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.725 [2024-11-27 08:10:24.580588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.725 qpair failed and we were unable to recover it. 00:27:30.725 [2024-11-27 08:10:24.580802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.725 [2024-11-27 08:10:24.580836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.725 qpair failed and we were unable to recover it. 00:27:30.725 [2024-11-27 08:10:24.581026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.725 [2024-11-27 08:10:24.581064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.725 qpair failed and we were unable to recover it. 00:27:30.725 [2024-11-27 08:10:24.581349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.581386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.581656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.581692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 
00:27:30.726 [2024-11-27 08:10:24.581835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.581871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.581999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.582036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.582187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.582221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.582416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.582451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.582683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.582723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.582917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.582966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.583756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.583786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.583969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.583989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.584240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.584257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.584483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.584501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 
00:27:30.726 [2024-11-27 08:10:24.584666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.584685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.584924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.584942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.585058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.585074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.585250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.585270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.585455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.585475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.585732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.585751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.585954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.585973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.586213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.586231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.586462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.586479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.586715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.586733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 
00:27:30.726 [2024-11-27 08:10:24.586913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.586931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.587070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.587089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.587334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.587353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.587531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.587549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.587695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.587713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.587883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.587900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.588158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.726 [2024-11-27 08:10:24.588178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.726 qpair failed and we were unable to recover it. 00:27:30.726 [2024-11-27 08:10:24.588434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.588453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.588719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.588737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.588833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.588850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 
00:27:30.727 [2024-11-27 08:10:24.589008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.589028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.589259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.589277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.589469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.589488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.589755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.589790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.590053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.590091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.590363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.590380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.590575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.590593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.590789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.590806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.591077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.591095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.591251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.591270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 
00:27:30.727 [2024-11-27 08:10:24.591543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.591561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.591825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.591842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.592064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.592085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.592322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.592340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.592557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.592579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.592809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.592826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.593012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.593030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.593280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.593298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.593477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.593496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.593768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.593785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 
00:27:30.727 [2024-11-27 08:10:24.593943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.593969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.594210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.594227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.727 [2024-11-27 08:10:24.594406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.727 [2024-11-27 08:10:24.594424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.727 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.594656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.594675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.594904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.594922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.595186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.595204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.595405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.595425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.595653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.595672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.595940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.596006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.596235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.596269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 
00:27:30.728 [2024-11-27 08:10:24.596462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.596480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.596617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.596636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.596838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.596871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.597080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.597117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.597408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.597425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.597581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.597599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.597771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.597788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.597992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.598029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.598177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.598211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.598417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.598465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 
00:27:30.728 [2024-11-27 08:10:24.598694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.598711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.598937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.598984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.599178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.599194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.599378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.599415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.599573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.599610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.599894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.599930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.600199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.600236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.600527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.600564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.600860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.600898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.601072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.601110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 
00:27:30.728 [2024-11-27 08:10:24.601274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.601287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.601401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.601439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.601650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.728 [2024-11-27 08:10:24.601683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.728 qpair failed and we were unable to recover it. 00:27:30.728 [2024-11-27 08:10:24.601824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.601861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.602093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.602139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.602402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.602438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.602701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.602737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.603025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.603064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.603273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.603312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.603477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.603492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 
00:27:30.729 [2024-11-27 08:10:24.603713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.603749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.603939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.603981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.604286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.604321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.604463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.604498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.604730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.604764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.605030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.605066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.605331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.605366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.605562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.605597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.605892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.605927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.606211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.606246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 
00:27:30.729 [2024-11-27 08:10:24.606492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.606528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.606840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.606874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.607091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.607127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.607345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.607380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.607606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.607620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.607859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.607872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.608011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.608025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.608194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.608208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.608360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.608375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.608555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.608589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 
00:27:30.729 [2024-11-27 08:10:24.608802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.608837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.609040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.609077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.609288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.609302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.729 [2024-11-27 08:10:24.609530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.729 [2024-11-27 08:10:24.609567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.729 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.609858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.609893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.610113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.610151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.610418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.610453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.610648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.610682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.610967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.611004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.611283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.611298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 
00:27:30.730 [2024-11-27 08:10:24.611443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.611457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.611605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.611619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.611814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.611849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.612064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.612100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.612292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.612333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.612547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.612582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.612757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.612770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.612934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.612952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.613200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.613233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.613427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.613462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 
00:27:30.730 [2024-11-27 08:10:24.613769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.613804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.614000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.614037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.614241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.614277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.614477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.614492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.614696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.614711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.614961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.614975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.615121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.615134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.615211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.615223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.615443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.615457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.615685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.615698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 
00:27:30.730 [2024-11-27 08:10:24.615863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.615879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.616120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.616134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.616359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.616393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.616535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.730 [2024-11-27 08:10:24.616568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.730 qpair failed and we were unable to recover it. 00:27:30.730 [2024-11-27 08:10:24.616776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.616811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.617029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.617043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.617248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.617283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.617567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.617601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.617876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.617909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.618120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.618156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 
00:27:30.731 [2024-11-27 08:10:24.618358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.618401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.618619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.618632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.618781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.618795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.619022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.619057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.619201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.619235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.619439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.619473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.619749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.619784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.620051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.620086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.620350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.620383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.620611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.620647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 
00:27:30.731 [2024-11-27 08:10:24.620839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.620872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.621182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.621220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.621503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.621537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.621808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.621843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.622098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.622141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.622435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.622469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.622697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.622731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.623016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.623053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.623336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.623370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.623603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.623637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 
00:27:30.731 [2024-11-27 08:10:24.623844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.623880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.624150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.624186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.624394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.624408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.624611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.624645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-11-27 08:10:24.624912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.731 [2024-11-27 08:10:24.624968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.625185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.625218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.625498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.625534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.625815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.625849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.626065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.626102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.626249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.626283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 
00:27:30.732 [2024-11-27 08:10:24.626493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.626507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.626772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.626806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.627028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.627064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.627274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.627288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.627543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.627580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.627862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.627895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.628107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.628142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.628458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.628492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.628773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.628788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.628943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.628965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 
00:27:30.732 [2024-11-27 08:10:24.629129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.629142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.629234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.629247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.629365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.629377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.629606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.629620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.629855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.629869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.630093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.630108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.630299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.630313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.630466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.630502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.630766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.630803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-11-27 08:10:24.631028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.732 [2024-11-27 08:10:24.631044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.732 qpair failed and we were unable to recover it. 
00:27:30.732 [2024-11-27 08:10:24.631303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.631339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.631621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.631658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.631868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.631904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.632177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.632213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.632499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.632540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.632818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.632852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.633118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.633155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.633452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.633466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.633705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.633719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.633828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.633842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 
00:27:30.733 [2024-11-27 08:10:24.633916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.633929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.634181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.634217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.634410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.634444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.634703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.634736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.634936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.634982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.635188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.635222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.635424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.635458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.635685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.635718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.635916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.635963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.636250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.636284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 
00:27:30.733 [2024-11-27 08:10:24.636495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.636529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.636786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.636799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.637042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.637056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.637225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.637238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.637475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.637488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.637647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.637660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.637863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.637897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.638116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.638152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.638355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.638389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.733 [2024-11-27 08:10:24.638601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.638635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 
00:27:30.733 [2024-11-27 08:10:24.638850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.733 [2024-11-27 08:10:24.638884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.733 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.639195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.639230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.639472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.639507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.639770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.639805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.640099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.640134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.640334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.640368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.640650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.640685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.640819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.640853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.641087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.641123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.641358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.641405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 
00:27:30.734 [2024-11-27 08:10:24.641640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.641654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.641813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.641826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.641996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.642011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.642294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.642328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.642537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.642578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.642857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.642890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.643059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.643093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.643249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.643283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.643568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.643602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.643907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.643940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 
00:27:30.734 [2024-11-27 08:10:24.644260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.644295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.644511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.644545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.644805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.644840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.645033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.645069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.645248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.645262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.645511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.645545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.645692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.645725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.645995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.646032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.646303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.646337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.646573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.646607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 
00:27:30.734 [2024-11-27 08:10:24.646891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.734 [2024-11-27 08:10:24.646925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.734 qpair failed and we were unable to recover it. 00:27:30.734 [2024-11-27 08:10:24.647174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.647209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.647401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.647435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.647719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.647732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.647999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.648013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.648113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.648125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.648364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.648398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.648703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.648737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.648975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.649012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.649222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.649257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 
00:27:30.735 [2024-11-27 08:10:24.649477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.649490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.649739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.649752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.649987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.650023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.650289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.650325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.650545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.650559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.650829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.650863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.651075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.651111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.651303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.651336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.651639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.651673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.651906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.651940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 
00:27:30.735 [2024-11-27 08:10:24.652161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.652195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.652413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.652427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.652596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.652630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.652839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.652873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.653092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.653129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.653418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.653452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.653731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.653765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.654059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.654095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.654366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.654379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.654595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.654609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 
00:27:30.735 [2024-11-27 08:10:24.654780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.654814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.735 qpair failed and we were unable to recover it. 00:27:30.735 [2024-11-27 08:10:24.655039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.735 [2024-11-27 08:10:24.655075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.655262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.655275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.655462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.655496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.655794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.655828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.656102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.656136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.656372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.656406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.656622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.656636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.656856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.656869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.657144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.657178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 
00:27:30.736 [2024-11-27 08:10:24.657389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.657423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.657644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.657678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.657939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.657983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.658278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.658312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.658573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.658599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.658906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.658940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.659248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.659283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.659546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.659580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.659811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.659846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.660141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.660176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 
00:27:30.736 [2024-11-27 08:10:24.660390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.660403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.660616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.660633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.660860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.660874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.661047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.661084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.661290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.661325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.661544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.661578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.661875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.661890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.662005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.662017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.662160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.662172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.662337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.662350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 
00:27:30.736 [2024-11-27 08:10:24.662520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.736 [2024-11-27 08:10:24.662553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.736 qpair failed and we were unable to recover it. 00:27:30.736 [2024-11-27 08:10:24.662757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.662792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.663101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.663138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.663388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.663402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.663619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.663633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.663850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.663864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.664108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.664122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.664347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.664361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.664508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.664522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.664753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.664788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 
00:27:30.737 [2024-11-27 08:10:24.665073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.665110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.665326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.665361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.665651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.665685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.665972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.666008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.666288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.666323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.666536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.666570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.666763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.666797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.666999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.667034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.667270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.667304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.667442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.667477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 
00:27:30.737 [2024-11-27 08:10:24.667717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.667751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.668036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.668072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.668358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.668392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.668677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.668711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.668908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.668942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.669146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.669180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.669395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.669429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.669626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.669640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.669818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.669854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.670122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.670158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 
00:27:30.737 [2024-11-27 08:10:24.670466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.670500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-11-27 08:10:24.670796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.737 [2024-11-27 08:10:24.670836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.671108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.671144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.671401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.671414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.671589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.671623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.671836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.671869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.672151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.672187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.672415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.672450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.672671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.672684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.672787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.672830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 
00:27:30.738 [2024-11-27 08:10:24.673094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.673131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.673452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.673486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.673745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.673779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.674009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.674045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.674308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.674343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.674668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.674703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.674984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.675019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.675274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.675308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.675559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.675573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.675823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.675858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 
00:27:30.738 [2024-11-27 08:10:24.676052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.676088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.676297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.676331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.676542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.676576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.676786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.676819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.677081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.677117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.677318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.677356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.677524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.677538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-11-27 08:10:24.677645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.738 [2024-11-27 08:10:24.677658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.677973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.678009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.678247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.678285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 
00:27:30.739 [2024-11-27 08:10:24.678583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.678618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.678817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.678831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.679020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.679034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.679186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.679199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.679363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.679377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.679479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.679511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.679772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.679806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.679993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.680029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.680317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.680359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.680524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.680538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 
00:27:30.739 [2024-11-27 08:10:24.680776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.680790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.681057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.681075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.681235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.681248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.681424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.681438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.681589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.681603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.681858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.681871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.682120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.682156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.682385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.682420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.682706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.682740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.682978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.683016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 
00:27:30.739 [2024-11-27 08:10:24.683144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.683177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.683417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.683452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.683739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.683775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.683988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.684025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.684318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.684352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.684551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.684585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.684868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.684903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-11-27 08:10:24.685221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.739 [2024-11-27 08:10:24.685257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.685565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.685600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.685814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.685848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 
00:27:30.740 [2024-11-27 08:10:24.686078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.686114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.686336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.686350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.686599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.686613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.686827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.686841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.687063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.687079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.687260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.687271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.687494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.687529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.687769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.687802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.687999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.688034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.688170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.688203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 
00:27:30.740 [2024-11-27 08:10:24.688528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.688542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.688646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.688661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.688836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.688871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.689076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.689111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.689328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.689363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.689578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.689592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.689845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.689879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.690084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.690119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.690380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.690394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.690666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.690700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 
00:27:30.740 [2024-11-27 08:10:24.690972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.691008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.691299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.691349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.691587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.691600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.691758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.691771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.691934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.691955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.692226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.692240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.692401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.692414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.692571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.692597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.692767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.692781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 00:27:30.740 [2024-11-27 08:10:24.693018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.740 [2024-11-27 08:10:24.693032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.740 qpair failed and we were unable to recover it. 
00:27:30.740 [2024-11-27 08:10:24.693118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.693131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.693399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.693413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.693619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.693632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.693792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.693806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.694011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.694051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.694253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.694288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.694600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.694634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.694918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.694963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.695287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.695330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.695492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.695506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 
00:27:30.741 [2024-11-27 08:10:24.695724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2607590 Killed "${NVMF_APP[@]}" "$@" 00:27:30.741 [2024-11-27 08:10:24.695740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.695917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.695930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.696156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.696170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.696329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.696342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:30.741 [2024-11-27 08:10:24.696558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.696575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.696738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.696773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:30.741 [2024-11-27 08:10:24.697016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.697053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.697256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:30.741 [2024-11-27 08:10:24.697292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 
00:27:30.741 [2024-11-27 08:10:24.697556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.697592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.697808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.697842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.741 [2024-11-27 08:10:24.697969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.698006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.698285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.698319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.698587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.698601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.698759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.698773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.698923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.699074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.699290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.699328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.699615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.699628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 
00:27:30.741 [2024-11-27 08:10:24.699717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.699729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.699886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.699929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.700190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.741 [2024-11-27 08:10:24.700225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.741 qpair failed and we were unable to recover it. 00:27:30.741 [2024-11-27 08:10:24.700527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.700562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.700828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.700864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.701131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.701168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.701430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.701465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.701753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.701789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.702068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.702104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.702306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.702339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 
00:27:30.742 [2024-11-27 08:10:24.702542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.702555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.702678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.702715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.703028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.703065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.703283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.703317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.703590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.703629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.703870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.703884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.704093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.704128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.704422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.704457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.704706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.704742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 
00:27:30.742 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2608310
00:27:30.742 [2024-11-27 08:10:24.704975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.742 [2024-11-27 08:10:24.705013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.742 qpair failed and we were unable to recover it.
00:27:30.742 [2024-11-27 08:10:24.705233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.742 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2608310
00:27:30.742 [2024-11-27 08:10:24.705271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.742 qpair failed and we were unable to recover it.
00:27:30.742 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:30.742 [2024-11-27 08:10:24.705489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.742 [2024-11-27 08:10:24.705525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.742 qpair failed and we were unable to recover it.
00:27:30.742 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2608310 ']'
00:27:30.742 [2024-11-27 08:10:24.705721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.742 [2024-11-27 08:10:24.705738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.742 qpair failed and we were unable to recover it.
00:27:30.742 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:30.742 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:30.742 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:30.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:30.742 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:30.742 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:30.742 [2024-11-27 08:10:24.708175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.742 [2024-11-27 08:10:24.708211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.742 qpair failed and we were unable to recover it.
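Interleaved with the connection errors, the trace above shows the tc2 test bringing the nvmf target back up inside the cvl_0_0_ns_spdk namespace and then waiting for its RPC socket via waitforlisten. The sketch below restates that start-and-wait pattern in plain shell; the binary path, namespace, and flags are copied from the log, while the rpc.py polling loop is only a simplified, assumed stand-in for what SPDK's waitforlisten helper accomplishes, not its actual implementation:

# Simplified sketch of the relaunch-and-wait step seen in the trace above.
# The polling loop is an assumption standing in for waitforlisten().
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

for _ in $(seq 1 100); do
  if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
    break
  fi
  sleep 0.5
done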
00:27:30.742 [2024-11-27 08:10:24.708527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.708551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.708738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.708756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.708918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.708935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.709166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.709184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.709391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.709409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.709607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.709624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.709741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.709754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.742 [2024-11-27 08:10:24.709881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.742 [2024-11-27 08:10:24.709900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.742 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.710151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.710171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.710277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.710294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 
00:27:30.743 [2024-11-27 08:10:24.710470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.710489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.710716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.710733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.710844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.710857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.711046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.711064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.711243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.711260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.711469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.711486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.711658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.711677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.711879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.711895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.712109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.712127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.712306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.712323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 
00:27:30.743 [2024-11-27 08:10:24.712447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.712462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.712642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.712660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.712754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.712770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.712969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.712987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.713244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.713264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.713363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.713376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.713486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.713504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.713681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.713697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.713867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.713884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.714052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.714068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 
00:27:30.743 [2024-11-27 08:10:24.714337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.714357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.714490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.714506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.714703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.714719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.714888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.714905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.715106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.715123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.743 qpair failed and we were unable to recover it. 00:27:30.743 [2024-11-27 08:10:24.715247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.743 [2024-11-27 08:10:24.715265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.715439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.715455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.715678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.715694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.717966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.717998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.718260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.718279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 
00:27:30.744 [2024-11-27 08:10:24.718464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.718481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.718738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.718757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.718888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.718903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.719136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.719156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.719325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.719344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.719477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.719493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.719599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.719614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.719733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.719748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.719986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.720006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.720177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.720194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 
00:27:30.744 [2024-11-27 08:10:24.720349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.720365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.720470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.720486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.720637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.720650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.720805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.720820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.720923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.720938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.721184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.721201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.721401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.721415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.721673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.721686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.721859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.721877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.721987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.722002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 
00:27:30.744 [2024-11-27 08:10:24.722164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.722180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.722306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.722319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.722473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.722488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.722658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.722673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.722895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.722910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.723114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.723130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.723285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.723303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.723403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.744 [2024-11-27 08:10:24.723417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.744 qpair failed and we were unable to recover it. 00:27:30.744 [2024-11-27 08:10:24.723617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.723631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.723800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.723813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 
00:27:30.745 [2024-11-27 08:10:24.723974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.723990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.724170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.724184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.724382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.724397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.724477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.724489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.724704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.724719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.724865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.724878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.725026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.725040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.725186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.725201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.725455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.725471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.725633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.725649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 
00:27:30.745 [2024-11-27 08:10:24.727020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.727046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.727181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.727199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.727374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.727390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.727586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.727606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.727868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.727886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.728072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.728088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.728189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.728203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.728437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.728453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.728663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.728680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.728924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.728942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 
00:27:30.745 [2024-11-27 08:10:24.729056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.729070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.729156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.729171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.729403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.729418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.729595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.729607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.729828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.729842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.729940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.729961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.730132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.730146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.730243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.730258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.730370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.730383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.730544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.730559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 
00:27:30.745 [2024-11-27 08:10:24.730722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.730738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.730976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.730990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.745 [2024-11-27 08:10:24.731192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.745 [2024-11-27 08:10:24.731206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.745 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.733196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.733228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.733432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.733451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.733681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.733698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.733810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.733829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.733938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.733961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.734092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.734106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.734269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.734285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 
00:27:30.746 [2024-11-27 08:10:24.734454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.734468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.734553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.734567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.734780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.734798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.734944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.734969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.735153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.735169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.735279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.735293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.735384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.735397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.735547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.735563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.735775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.735796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.735907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.735922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 
00:27:30.746 [2024-11-27 08:10:24.736029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.736044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.736136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.736149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.736243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.736257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.736342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.736356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.736508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.736522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.736736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.736752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.736972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.736989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.737168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.737182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.737289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.737315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.737423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.737438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 
00:27:30.746 [2024-11-27 08:10:24.737527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.737540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.737646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.737660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.737763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.737777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.737882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.737896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.738003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.738016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.739961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.739992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.740201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.740219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.746 qpair failed and we were unable to recover it. 00:27:30.746 [2024-11-27 08:10:24.740304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.746 [2024-11-27 08:10:24.740318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.740532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.740550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.740711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.740729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 
00:27:30.747 [2024-11-27 08:10:24.740918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.740933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.741203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.741225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.741317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.741330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.741494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.741507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.741658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.741675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.741828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.741843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.742056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.742075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.742228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.742243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.742344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.742357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.742525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.742540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 
00:27:30.747 [2024-11-27 08:10:24.742635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.742652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.742743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.742757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.742847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.742861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.742957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.742972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.743076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.743088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.743177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.743192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.743351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.743366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.743535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.743552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.743704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.743720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.743800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.743812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 
00:27:30.747 [2024-11-27 08:10:24.743897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.743911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.744003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.744018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.744158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.744173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.744344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.744360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.744478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.744491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.744656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.744668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.744931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.744977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.745218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.745240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.745339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.745352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.745512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.745529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 
00:27:30.747 [2024-11-27 08:10:24.745807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.745824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.745936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.745958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.748984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.749014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.749323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.749341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.749589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.747 [2024-11-27 08:10:24.749607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.747 qpair failed and we were unable to recover it. 00:27:30.747 [2024-11-27 08:10:24.749775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.749791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.749902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.749917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.750076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.750090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.750223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.750237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.750431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.750445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 
00:27:30.748 [2024-11-27 08:10:24.750591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.750605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.750755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.750769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.750926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.750941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.751101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.751115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.751293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.751309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.751402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.751416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.751522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.751539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.751752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.751766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.751862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.751875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.751972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.751985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 
00:27:30.748 [2024-11-27 08:10:24.752067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.752079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.752289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.752303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.752449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.752462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.752562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.752576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.752727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.752740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.752895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.752910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.753057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.753072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.753282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.753295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.753467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.753480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.753571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.753585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 
00:27:30.748 [2024-11-27 08:10:24.753751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.753764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.753942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.753965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.754118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.754132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.754302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.754316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.754415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.754428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.754575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.754589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.748 qpair failed and we were unable to recover it. 00:27:30.748 [2024-11-27 08:10:24.754707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.748 [2024-11-27 08:10:24.754720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.754849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.754862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.755006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.755023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.755104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.755120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 
00:27:30.749 [2024-11-27 08:10:24.755297] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization...
00:27:30.749 [2024-11-27 08:10:24.755341] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:30.749 [2024-11-27 08:10:24.755367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.749 [2024-11-27 08:10:24.755381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.749 qpair failed and we were unable to recover it.
00:27:30.749 [2024-11-27 08:10:24.755552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.749 [2024-11-27 08:10:24.755564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.749 qpair failed and we were unable to recover it.
00:27:30.749 [2024-11-27 08:10:24.755659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.749 [2024-11-27 08:10:24.755670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.749 qpair failed and we were unable to recover it.
00:27:30.749 [2024-11-27 08:10:24.755788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.749 [2024-11-27 08:10:24.755797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.749 qpair failed and we were unable to recover it.
00:27:30.749 [2024-11-27 08:10:24.755899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.749 [2024-11-27 08:10:24.755910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.749 qpair failed and we were unable to recover it.
00:27:30.749 [2024-11-27 08:10:24.756069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.749 [2024-11-27 08:10:24.756083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.749 qpair failed and we were unable to recover it.
00:27:30.749 [2024-11-27 08:10:24.756327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.749 [2024-11-27 08:10:24.756340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.749 qpair failed and we were unable to recover it.
00:27:30.749 [2024-11-27 08:10:24.756437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.749 [2024-11-27 08:10:24.756449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.749 qpair failed and we were unable to recover it.
00:27:30.749 [2024-11-27 08:10:24.756553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.749 [2024-11-27 08:10:24.756565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:30.749 qpair failed and we were unable to recover it.
00:27:30.749 [2024-11-27 08:10:24.756650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.756662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.756757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.756769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.756861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.756874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.756959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.756972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.757116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.757129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.757358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.757371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.757546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.757562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.757644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.757656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.757863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.757877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.757984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.757996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 
00:27:30.749 [2024-11-27 08:10:24.758088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.758099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.758268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.758280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.758502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.758514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.758726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.758737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.758816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.758829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.758930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.758942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.759098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.759110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.759341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.759353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.759440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.759452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.761960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.761986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 
00:27:30.749 [2024-11-27 08:10:24.762249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.762263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.762447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.762462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.762674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.762693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.749 qpair failed and we were unable to recover it. 00:27:30.749 [2024-11-27 08:10:24.762910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.749 [2024-11-27 08:10:24.762931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.763101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.763116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.763255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.763268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.763440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.763453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.763532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.763545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.763698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.763712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.763860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.763873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 
00:27:30.750 [2024-11-27 08:10:24.763975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.763990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.764081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.764094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.764167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.764180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.764334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.764347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.764506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.764519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.764609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.764621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.764687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.764700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.764849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.764862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.764933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.764951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.765033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.765045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 
00:27:30.750 [2024-11-27 08:10:24.765199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.765212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.765416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.765428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.765583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.765595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.765676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.765689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.765837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.765849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.766066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.766080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.766242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.766256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.766397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.766409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.766544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.766556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.766802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.766815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 
00:27:30.750 [2024-11-27 08:10:24.766963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.766975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.767116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.767129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.767215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.767228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.767370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.767382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.767484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.767496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.767563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.767576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.767684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.767696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.767769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.767782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.767874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.767887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 00:27:30.750 [2024-11-27 08:10:24.767988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.768001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.750 qpair failed and we were unable to recover it. 
00:27:30.750 [2024-11-27 08:10:24.768150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.750 [2024-11-27 08:10:24.768163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.768301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.768313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.768396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.768408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.768607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.768620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.768698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.768710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.768898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.768911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.769078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.769090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.769253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.769265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.769358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.769371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.769509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.769520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 
00:27:30.751 [2024-11-27 08:10:24.769743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.769755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.769846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.769860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.769999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.770013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.770275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.770288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.770460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.770474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.770621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.770634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.770728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.770740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.770904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.770917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.771101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.771114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.771246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.771259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 
00:27:30.751 [2024-11-27 08:10:24.771403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.771416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.771578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.771591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.771678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.771690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.771763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.771776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.771846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.771858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.771931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.771944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.772069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.772084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.772155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.772168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.772349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.772362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.772445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.772457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 
00:27:30.751 [2024-11-27 08:10:24.772599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.772612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.772773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.772785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.772939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.772959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.773180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.773193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.773282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.773294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.773446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.773459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.773555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.773567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.773666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.773679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.751 [2024-11-27 08:10:24.773887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.751 [2024-11-27 08:10:24.773899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.751 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.774055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.774068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 
00:27:30.752 [2024-11-27 08:10:24.774157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.774171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.774252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.774265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.774346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.774358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.774436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.774449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.774537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.774550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.774625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.774639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.774772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.774785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.774938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.774957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.775103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.775115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.775189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.775201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 
00:27:30.752 [2024-11-27 08:10:24.775268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.775281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.775373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.775385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.775478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.775490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.775658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.775672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.775813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.775825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.776035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.776048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.776188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.776201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.776288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.776301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.776375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.776387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.776459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.776472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 
00:27:30.752 [2024-11-27 08:10:24.776617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.776629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.776698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.776711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.776798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.776809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.776907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.776919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.777011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.777024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.777165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.777178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.777333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.777347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.777438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.777450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.777604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.777616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.777757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.777769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 
00:27:30.752 [2024-11-27 08:10:24.777838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.777851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.777995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.778007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.778097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.778110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.778280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.778293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.778374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.778387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.778553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.778565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.778785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.778798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.752 [2024-11-27 08:10:24.778870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.752 [2024-11-27 08:10:24.778884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.752 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.779111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.779123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.779215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.779227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 
00:27:30.753 [2024-11-27 08:10:24.779298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.779311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.779384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.779398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.779556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.779568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.779676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.779689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.779827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.779840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.779982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.779996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.780085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.780097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.780305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.780317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.780547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.780559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.780772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.780784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 
00:27:30.753 [2024-11-27 08:10:24.780880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.780892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.781027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.781041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.781156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.781168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.781256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.781269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.781368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.781380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.781588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.781601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.781832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.781844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.782022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.782035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.782218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.782232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.782382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.782394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 
00:27:30.753 [2024-11-27 08:10:24.782477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.782489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.782704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.782717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.782807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.782820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.782972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.782984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.783089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.783102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.783249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.783262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.783413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.783430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.783520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.783534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.783650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.783662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.783754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.783766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 
00:27:30.753 [2024-11-27 08:10:24.783915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.753 [2024-11-27 08:10:24.783927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.753 qpair failed and we were unable to recover it. 00:27:30.753 [2024-11-27 08:10:24.784121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.784135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.784341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.784355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.784496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.784509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.784689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.784701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.784865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.784878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.784976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.784988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.785089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.785102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.785191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.785203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.785369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.785383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 
00:27:30.754 [2024-11-27 08:10:24.785474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.785486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.785646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.785659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.785865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.785878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.785941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.785959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.786113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.786126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.786206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.786218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.786304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.786316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.786489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.786502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.786574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.786586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.786679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.786692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 
00:27:30.754 [2024-11-27 08:10:24.786784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.786796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.786895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.786908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.786986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.786999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.787089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.787101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.787186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.787199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.787355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.787367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.787437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.787450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.787589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.787601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.787673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.787685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.787858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.787870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 
00:27:30.754 [2024-11-27 08:10:24.787950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.787963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.788196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.788209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.788290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.788302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.788385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.788397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.788479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.788492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.788660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.788673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.788747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.788761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.788897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.788925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.789025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.789038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 00:27:30.754 [2024-11-27 08:10:24.789248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.754 [2024-11-27 08:10:24.789261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.754 qpair failed and we were unable to recover it. 
00:27:30.754 [2024-11-27 08:10:24.789403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.789415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.789498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.789510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.789578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.789590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.789730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.789743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.789830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.789842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.789921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.789932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.790069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.790083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.790232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.790244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.790310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.790322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.790496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.790509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 
00:27:30.755 [2024-11-27 08:10:24.790597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.790608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.790746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.790759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.790848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.790860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.790996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.791008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.791168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.791180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.791389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.791402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.791560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.791573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.791640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.791651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.791796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.791809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.791975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.791987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 
00:27:30.755 [2024-11-27 08:10:24.792057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.792070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:30.755 [2024-11-27 08:10:24.792270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.755 [2024-11-27 08:10:24.792283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:30.755 qpair failed and we were unable to recover it. 00:27:31.072 [2024-11-27 08:10:24.792378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.792391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.792541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.792554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.792641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.792652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.792790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.792803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.792870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.792882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.793040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.793054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.793135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.793147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.793228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.793240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 
00:27:31.073 [2024-11-27 08:10:24.793337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.793349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.793430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.793441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.793586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.793598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.793752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.793765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.793913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.793926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.794132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.794144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.794286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.794301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.794399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.794412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.794545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.794558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 00:27:31.073 [2024-11-27 08:10:24.794790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.073 [2024-11-27 08:10:24.794803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.073 qpair failed and we were unable to recover it. 
00:27:31.073 [2024-11-27 08:10:24.794865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:31.073 [2024-11-27 08:10:24.794877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 
00:27:31.073 qpair failed and we were unable to recover it. 
[... the same error triplet (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x7f39c8000b90 and tqpair=0x7f39c4000b90, addr=10.0.0.2, port=4420, from 08:10:24.794865 through 08:10:24.824300 ...] 
00:27:31.080 [2024-11-27 08:10:24.824284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:31.080 [2024-11-27 08:10:24.824300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 
00:27:31.080 qpair failed and we were unable to recover it. 
00:27:31.080 [2024-11-27 08:10:24.824490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.080 [2024-11-27 08:10:24.824506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.080 qpair failed and we were unable to recover it. 00:27:31.080 [2024-11-27 08:10:24.824667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.080 [2024-11-27 08:10:24.824684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.080 qpair failed and we were unable to recover it. 00:27:31.080 [2024-11-27 08:10:24.824769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.824786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.824875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.824891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.825049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.825067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.825145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.825162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.825322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.825338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.825417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.825433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.825607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.825624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.825785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.825799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 
00:27:31.081 [2024-11-27 08:10:24.825940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.825958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.826118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.826130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.826262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.826275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.826369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.826386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.826533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.826549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.826632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.826648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.826788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.826804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.826960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.826977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.827076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.827092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.827181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.827197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 
00:27:31.081 [2024-11-27 08:10:24.827279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.827295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.827386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.827402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.827476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.827493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.827584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.827601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.827680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.827697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.827805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.827821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.827899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.827918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.828064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.828081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.828225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.828242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.828342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.828359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 
00:27:31.081 [2024-11-27 08:10:24.828436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.828452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.081 qpair failed and we were unable to recover it. 00:27:31.081 [2024-11-27 08:10:24.828554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.081 [2024-11-27 08:10:24.828570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.828666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.828683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.828941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.828972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.829139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.829155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.829243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.829259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.829476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.829492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.829642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.829659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.829805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.829822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.829963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.829980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 
00:27:31.082 [2024-11-27 08:10:24.830075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.830092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.830238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.830255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.830344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.830360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.830450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.830466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.830551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.830568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.830671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.830689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.830768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.830785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.830959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.830976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.831072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.831089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.831173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.831190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 
00:27:31.082 [2024-11-27 08:10:24.831352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.831368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.831529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.831545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.831633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.831649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.831814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.831831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.831927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.831940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.832148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.832160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.832330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.832342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.832547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.832559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.832704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.082 [2024-11-27 08:10:24.832716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.082 qpair failed and we were unable to recover it. 00:27:31.082 [2024-11-27 08:10:24.832800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.832812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 
00:27:31.083 [2024-11-27 08:10:24.832897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.832909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.833000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.833012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.833094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.833106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.833182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.833194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.833278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.833290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.833376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.833388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.833528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.833541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.833689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.833701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.833843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.833856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.834000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.834013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 
00:27:31.083 [2024-11-27 08:10:24.834082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.834095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.834248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.834259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.834462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.834474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.834558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.834571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.834650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.834662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.834804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.834816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.834897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.834909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.834987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.835000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.835147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.835160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.835299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.835313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 
00:27:31.083 [2024-11-27 08:10:24.835470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.835483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.835578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.835592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.835658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.835670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.835821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.835834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.835967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.835980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.836125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.836139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.836219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.836240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.836308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.836320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.083 [2024-11-27 08:10:24.836457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.083 [2024-11-27 08:10:24.836469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.083 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.836547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.836560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 
00:27:31.084 [2024-11-27 08:10:24.836632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.836644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.836720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.836733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.836887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.836900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.836979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.836994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.837072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.837084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.837162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.837174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.837333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.837345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.837415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.837427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.837561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.837573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.837776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.837789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 
00:27:31.084 [2024-11-27 08:10:24.837863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.837876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.837959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.837972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.838040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.838052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.838125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.838138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.838207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.838219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.838356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.838370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.838448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.838461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.838531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.838542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.838623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.838636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.838696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.838708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 
00:27:31.084 [2024-11-27 08:10:24.838840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.838852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.838910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.084 [2024-11-27 08:10:24.838922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.084 qpair failed and we were unable to recover it. 00:27:31.084 [2024-11-27 08:10:24.838998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.839011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.839163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.839177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.839375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.839388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.839526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.839538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.839600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.839613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.839686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.839699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.839845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.839857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.839994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.840007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 
00:27:31.085 [2024-11-27 08:10:24.840102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.840114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.840261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.840274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.840349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.840362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.840523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.840536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.840673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.840685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.840826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.840838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.840910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.840925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.841009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.841022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.841105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.841116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.841198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.841211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 
00:27:31.085 [2024-11-27 08:10:24.841346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.841358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.841566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.841579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.841660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.841674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.841760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.841775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.841857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.841869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.842007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.842020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.842169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.842181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.842312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.842325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.842422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.842435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.085 [2024-11-27 08:10:24.842507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.842519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 
00:27:31.085 [2024-11-27 08:10:24.842652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.085 [2024-11-27 08:10:24.842665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.085 qpair failed and we were unable to recover it. 00:27:31.086 [2024-11-27 08:10:24.842734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.086 [2024-11-27 08:10:24.842747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.086 qpair failed and we were unable to recover it. 00:27:31.086 [2024-11-27 08:10:24.842920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.086 [2024-11-27 08:10:24.842933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.086 qpair failed and we were unable to recover it. 00:27:31.086 [2024-11-27 08:10:24.843018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.086 [2024-11-27 08:10:24.843031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.086 qpair failed and we were unable to recover it. 00:27:31.086 [2024-11-27 08:10:24.843110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.086 [2024-11-27 08:10:24.843132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.086 qpair failed and we were unable to recover it. 00:27:31.086 [2024-11-27 08:10:24.843207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.086 [2024-11-27 08:10:24.843231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.086 qpair failed and we were unable to recover it. 00:27:31.086 [2024-11-27 08:10:24.843317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.086 [2024-11-27 08:10:24.843330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.086 qpair failed and we were unable to recover it. 00:27:31.086 [2024-11-27 08:10:24.843483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.086 [2024-11-27 08:10:24.843496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.086 qpair failed and we were unable to recover it. 00:27:31.086 [2024-11-27 08:10:24.843637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.086 [2024-11-27 08:10:24.843649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.086 qpair failed and we were unable to recover it. 
00:27:31.086 [2024-11-27 08:10:24.843693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:31.086 [connect()/tqpair error messages repeated for every reconnect attempt from 08:10:24.843724 through 08:10:24.844657; each qpair failed and could not be recovered]
00:27:31.092 [connect()/tqpair error messages repeated for every reconnect attempt from 08:10:24.844808 through 08:10:24.868284; each qpair failed and could not be recovered]
00:27:31.092 [2024-11-27 08:10:24.868417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.092 [2024-11-27 08:10:24.868430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.092 qpair failed and we were unable to recover it. 00:27:31.092 [2024-11-27 08:10:24.868567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.092 [2024-11-27 08:10:24.868580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.092 qpair failed and we were unable to recover it. 00:27:31.092 [2024-11-27 08:10:24.868670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.092 [2024-11-27 08:10:24.868682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.092 qpair failed and we were unable to recover it. 00:27:31.092 [2024-11-27 08:10:24.868753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.092 [2024-11-27 08:10:24.868770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.092 qpair failed and we were unable to recover it. 00:27:31.092 [2024-11-27 08:10:24.868984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.092 [2024-11-27 08:10:24.868998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.092 qpair failed and we were unable to recover it. 00:27:31.092 [2024-11-27 08:10:24.869143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.092 [2024-11-27 08:10:24.869156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.869292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.869304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.869391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.869404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.869490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.869504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.869658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.869671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 
00:27:31.093 [2024-11-27 08:10:24.869841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.869853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.870100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.870113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.870269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.870282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.870411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.870424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.870503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.870515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.870667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.870681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.870838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.870850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.871023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.871036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.871118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.871132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.871285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.871297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 
00:27:31.093 [2024-11-27 08:10:24.871394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.871407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.871482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.871496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.871663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.871676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.871742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.871755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.871892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.871904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.872077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.872090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.872235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.872248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.872323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.872335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.872413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.872425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.872561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.872573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 
00:27:31.093 [2024-11-27 08:10:24.872666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.872679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.872762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.872775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.872913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.872926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.873005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.873019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.093 [2024-11-27 08:10:24.873162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-27 08:10:24.873175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.093 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.873256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.873270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.873348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.873361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.873431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.873444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.873506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.873520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.873601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.873614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 
00:27:31.094 [2024-11-27 08:10:24.873676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.873688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.873770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.873782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.873850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.873863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.873944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.873978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.874043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.874056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.874200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.874212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.874275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.874287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.874351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.874364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.874426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.874441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.874594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.874608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 
00:27:31.094 [2024-11-27 08:10:24.874680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.874693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.874845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.874857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.874943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.874961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.875105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.875118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.875187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.875199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.875334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.875347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.875499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.875512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.875644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.875657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.875747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.875760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.875931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.875944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 
00:27:31.094 [2024-11-27 08:10:24.876041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.876053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.876173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.876214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.876415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.876453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.876567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-27 08:10:24.876597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.094 qpair failed and we were unable to recover it. 00:27:31.094 [2024-11-27 08:10:24.876755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.876772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.876922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.876938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.877037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.877054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.877129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.877146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.877300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.877316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.877407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.877424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 
00:27:31.095 [2024-11-27 08:10:24.877658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.877674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.877767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.877784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.877859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.877875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.878158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.878175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.878260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.878277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.878433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.878452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.878547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.878564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.878709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.878724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.878802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.878817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.878959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.878979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 
00:27:31.095 [2024-11-27 08:10:24.879061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.879077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.879219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.879235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.879391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.879408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.879500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.879516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.879676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.879692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.879771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.879787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.879873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.879889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.880096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.880114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.880284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.880305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.880468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.880484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 
00:27:31.095 [2024-11-27 08:10:24.880573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.880590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.880686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.880702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.880862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.880878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.881097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.095 [2024-11-27 08:10:24.881114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.095 qpair failed and we were unable to recover it. 00:27:31.095 [2024-11-27 08:10:24.881193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.881210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.881368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.881384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.881528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.881544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.881631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.881648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.881819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.881835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.881936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.881959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 
00:27:31.096 [2024-11-27 08:10:24.882121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.882138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.882226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.882242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.882489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.882506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.882603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.882620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.882763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.882780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.882936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.882958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.883048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.883064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.883203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.883220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.883300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.883317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.883470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.883486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 
00:27:31.096 [2024-11-27 08:10:24.883654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.883670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.883769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.883786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.883886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.883902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.883985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.884003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.884147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.884164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.884368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.884387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.884538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.884556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.884713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.884730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.096 [2024-11-27 08:10:24.884857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.096 [2024-11-27 08:10:24.884873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.096 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.884969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.884987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 
00:27:31.097 [2024-11-27 08:10:24.885179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.885196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.885346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.097 [2024-11-27 08:10:24.885353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.885370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.097 [2024-11-27 08:10:24.885372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.097 [2024-11-27 08:10:24.885379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.885386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.097 [2024-11-27 08:10:24.885392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.097 [2024-11-27 08:10:24.885468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.885484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.885641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.885657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.885936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.885970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.886109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.886123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.886206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.886218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.886286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.886299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 
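The app.c NOTICE lines interleaved above describe how a trace snapshot can be taken from the still-running nvmf target. A minimal capture sketch, reusing only the command and path quoted in those NOTICE messages (the '-s nvmf -i 0' arguments and /dev/shm/nvmf_trace.0 come straight from the log and are not verified against any particular SPDK build):
# Capture a snapshot of trace events from the running nvmf application
spdk_trace -s nvmf -i 0
# If this is the only SPDK application currently running, the bare form also works
spdk_trace
# Or keep the shared-memory trace file for offline analysis/debug
# (the destination path below is arbitrary)
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0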
00:27:31.097 [2024-11-27 08:10:24.886441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.886454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.886546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.886560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.886713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.886726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.886803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.886815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.886890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.886903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.887043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.887057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.887144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.887157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.887071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:31.097 [2024-11-27 08:10:24.887178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:31.097 [2024-11-27 08:10:24.887284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:31.097 [2024-11-27 08:10:24.887368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.887381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.887285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:31.097 [2024-11-27 08:10:24.887454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.887465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 
00:27:31.097 [2024-11-27 08:10:24.887597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.887610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.887813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.887826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.887969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.887982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.888130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.888144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.888340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.888353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.888430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.888443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.888588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.888602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.888704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.097 [2024-11-27 08:10:24.888717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.097 qpair failed and we were unable to recover it. 00:27:31.097 [2024-11-27 08:10:24.888863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.888877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.888957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.888970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 
00:27:31.098 [2024-11-27 08:10:24.889053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.889065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.889142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.889161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.889242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.889253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.889335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.889348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.889480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.889492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.889638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.889652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.889862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.889875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.889963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.889976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.890126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.890138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.890228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.890240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 
00:27:31.098 [2024-11-27 08:10:24.890431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.890445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.890515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.890528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.890618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.890630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.890706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.890719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.890799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.890810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.890954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.890968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.891111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.891123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.891326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.891340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.891484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.891500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.891594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.891606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 
00:27:31.098 [2024-11-27 08:10:24.891746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.891759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.891842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.891854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.892081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.892095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.892198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.892212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.892298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.892310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.892413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.892427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.892581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.892594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.892700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.892713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.892866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.098 [2024-11-27 08:10:24.892880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.098 qpair failed and we were unable to recover it. 00:27:31.098 [2024-11-27 08:10:24.893009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.893023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 
00:27:31.099 [2024-11-27 08:10:24.893096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.893107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.893194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.893207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.893349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.893362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.893588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.893606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.893711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.893724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.893801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.893813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.893901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.893914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.894056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.894069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.894221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.894235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.894319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.894332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 
00:27:31.099 [2024-11-27 08:10:24.894402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.894415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.894562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.894575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.894649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.894662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.894812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.894824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.894901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.894914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.895011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.895024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.895117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.895129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.895247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.895261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.895345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.895358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.895447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.895462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 
00:27:31.099 [2024-11-27 08:10:24.895605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.895618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.895686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.895700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.895763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.895775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.895913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.895925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.896091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.896105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.896171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.896184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.896247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.896259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.896463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.896477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.896620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.896635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 00:27:31.099 [2024-11-27 08:10:24.896777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.099 [2024-11-27 08:10:24.896789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.099 qpair failed and we were unable to recover it. 
00:27:31.099 [2024-11-27 08:10:24.896923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.896936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.897011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.897025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.897179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.897192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.897272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.897285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.897446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.897459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.897563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.897576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.897662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.897675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.897776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.897789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.897925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.897953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.898023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.898036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 
00:27:31.100 [2024-11-27 08:10:24.898124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.898136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.898274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.898287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.898386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.898398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.898480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.898492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.898573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.898586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.898655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.898667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.898811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.898824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.898965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.898978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.899052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.899064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.899204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.899217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 
00:27:31.100 [2024-11-27 08:10:24.899354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.899367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.899457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.899469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.899564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.899578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.899650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.899662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.899801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.899815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.899880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.899892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.900023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.900035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.900124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.900138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.100 qpair failed and we were unable to recover it. 00:27:31.100 [2024-11-27 08:10:24.900288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.100 [2024-11-27 08:10:24.900302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.900450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.900463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 
00:27:31.101 [2024-11-27 08:10:24.900600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.900614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.900765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.900778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.900933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.900951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.901037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.901049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.901138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.901150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.901232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.901244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.901336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.901348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.901482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.901497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.901566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.901581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.901654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.901666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 
00:27:31.101 [2024-11-27 08:10:24.901751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.901764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.901847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.901860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.901930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.901943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.902033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.902046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.902114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.902125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.902317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.902330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.902406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.902417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.902497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.902510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.902593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.902605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.902757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.902771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 
00:27:31.101 [2024-11-27 08:10:24.902848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.902860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.903104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.903118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.903202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.903214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.903377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.903392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.903467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.903480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.903566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.903579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.903746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.903759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.903977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.903992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.904084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.101 [2024-11-27 08:10:24.904098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.101 qpair failed and we were unable to recover it. 00:27:31.101 [2024-11-27 08:10:24.904173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.904185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 
00:27:31.102 [2024-11-27 08:10:24.904338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.904353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.904421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.904434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.904568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.904582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.904646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.904660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.904796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.904809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.904894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.904906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.905042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.905054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.905124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.905137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.905373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.905386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.905532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.905544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 
00:27:31.102 [2024-11-27 08:10:24.905625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.905638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.905697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.905711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.905790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.905803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.905942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.905961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.906105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.906120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.906291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.906304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.906393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.906407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.906470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.906482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.906566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.906583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.906750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.906763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 
00:27:31.102 [2024-11-27 08:10:24.906902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.906916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.907047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.907061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.907146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.907159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.907284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.907297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.907467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.907481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.907578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.907590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.102 [2024-11-27 08:10:24.907790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.102 [2024-11-27 08:10:24.907804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.102 qpair failed and we were unable to recover it. 00:27:31.103 [2024-11-27 08:10:24.908005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.103 [2024-11-27 08:10:24.908018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.103 qpair failed and we were unable to recover it. 00:27:31.103 [2024-11-27 08:10:24.908219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.103 [2024-11-27 08:10:24.908233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.103 qpair failed and we were unable to recover it. 00:27:31.103 [2024-11-27 08:10:24.908323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.103 [2024-11-27 08:10:24.908335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.103 qpair failed and we were unable to recover it. 
00:27:31.103 [2024-11-27 08:10:24.908418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.103 [2024-11-27 08:10:24.908431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.103 qpair failed and we were unable to recover it.
00:27:31.103 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 08:10:24.908 through 08:10:24.944 ...]
00:27:31.109 [2024-11-27 08:10:24.944939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.109 [2024-11-27 08:10:24.944955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.109 qpair failed and we were unable to recover it.
00:27:31.109 [2024-11-27 08:10:24.945163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.945175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.945377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.945390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.945592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.945604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.945776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.945788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.945962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.945975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.946131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.946146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.946344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.946357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.946557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.946569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.946700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.946712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.946966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.946979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 
00:27:31.109 [2024-11-27 08:10:24.947062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.947075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.947326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.947338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.947590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.947602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.947841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.947853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.948031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.948044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.948186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.948198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.948466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.948479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.948715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.948727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.948933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.948945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.949158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.949171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 
00:27:31.109 [2024-11-27 08:10:24.949370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.949383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.949456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.949467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.949622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.949634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.949832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.949844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.950075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.950088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.950313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.950325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.109 [2024-11-27 08:10:24.950481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.109 [2024-11-27 08:10:24.950493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.109 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.950706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.950718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.950943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.950959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.951217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.951229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 
00:27:31.110 [2024-11-27 08:10:24.951409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.951421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.951564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.951576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.951754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.951766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.951918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.951930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.952087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.952099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.952274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.952287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.952487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.952500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.952713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.952725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.952953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.952966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.953165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.953178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 
00:27:31.110 [2024-11-27 08:10:24.953406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.953419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.953561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.953574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.953821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.953835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.954039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.954052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.954226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.954239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.954452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.954466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.954603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.954614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.954698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.954711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.954939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.954965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.955173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.955185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 
00:27:31.110 [2024-11-27 08:10:24.955395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.955407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.955607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.955619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.955705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.955717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.955856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.955868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.956037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.956050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.956281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.956294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.956545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.956557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.956773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.956786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.956955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.956967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.957203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.957215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 
00:27:31.110 [2024-11-27 08:10:24.957352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.957364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.957508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.957521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.957733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.957745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.957880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.957891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.110 [2024-11-27 08:10:24.958129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.110 [2024-11-27 08:10:24.958141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.110 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.958366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.958378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.958627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.958639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.958790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.958803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.958958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.958970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.959171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.959184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 
00:27:31.111 [2024-11-27 08:10:24.959414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.959427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.959626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.959639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.959777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.959788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.960001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.960013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.960187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.960199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.960384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.960395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.960533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.960544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.960629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.960641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.960864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.960876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.960975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.960987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 
00:27:31.111 [2024-11-27 08:10:24.961050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.961062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.961146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.961158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.961306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.961318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.961548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.961560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.961809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.961822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.962051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.962066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.962299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.962311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.962466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.962479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.962702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.962714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.962861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.962873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 
00:27:31.111 [2024-11-27 08:10:24.963073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.963085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.963311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.963323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.963572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.963584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.963670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.963682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.963860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.963872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.964004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.964016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.964119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.964131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.964366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.964378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.964537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.964549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.964762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.964774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 
00:27:31.111 [2024-11-27 08:10:24.964954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.964966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.965121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.965133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.111 qpair failed and we were unable to recover it. 00:27:31.111 [2024-11-27 08:10:24.965233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.111 [2024-11-27 08:10:24.965247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.965345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.965357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.965524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.965536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.965604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.965616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.965832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.965845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.966045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.966057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.966152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.966164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.966242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.966254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 
00:27:31.112 [2024-11-27 08:10:24.966455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.966466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.966599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.966611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.966840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.966852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.967071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.967084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.967233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.967245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.967311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.967324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.967471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.967484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.967697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.967710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.967857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.967869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.968070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.968083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 
00:27:31.112 [2024-11-27 08:10:24.968237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.968249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.968451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.968463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.968698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.968710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.968911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.968923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.969074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.969086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.969246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.969260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.969486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.969499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.969688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.969699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.969926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.969938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.970094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.970105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 
00:27:31.112 [2024-11-27 08:10:24.970318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.970331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.970577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.970590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.970759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.970771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.970919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.970931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.971079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.971091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.971242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.971255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.112 qpair failed and we were unable to recover it. 00:27:31.112 [2024-11-27 08:10:24.971394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.112 [2024-11-27 08:10:24.971406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.971552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.971564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.971830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.971842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.972000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.972012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 
00:27:31.113 [2024-11-27 08:10:24.972183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.972195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.972378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.972390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.972533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.972546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.972792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.972805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.973035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.973047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.973276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.973289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.973439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.973450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.973669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.973682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.973827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.973839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.974046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.974059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 
00:27:31.113 [2024-11-27 08:10:24.974295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.974307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.974578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.974591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.974744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.974756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.974981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.974994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.975073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.975084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.975294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.975306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.975506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.975518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.975670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.975682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.975875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.975887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.976138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.976151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 
00:27:31.113 [2024-11-27 08:10:24.976329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.976341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.976583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.976595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.976793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.976804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.976886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.976899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.977068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.977080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.977217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.977230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.977377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.977389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.977474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.977486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.977646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.977658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.977882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.977894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 
00:27:31.113 [2024-11-27 08:10:24.978026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.978038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.978303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.978316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.978383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.978395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.978651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.113 [2024-11-27 08:10:24.978664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.113 qpair failed and we were unable to recover it. 00:27:31.113 [2024-11-27 08:10:24.978919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.978931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.979078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.979090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.979334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.979346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.979434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.979446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.979623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.979635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.979867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.979878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 
00:27:31.114 [2024-11-27 08:10:24.980027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.980040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.980195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.980206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.980404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.980416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.980616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.980628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.980826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.980838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.981060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.981073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.981273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.981285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.981577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.981589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.981816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.981828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.981980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.981993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 
00:27:31.114 [2024-11-27 08:10:24.982207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.982219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.982444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.982456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.982646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.982659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.982862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.982873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.983099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.983111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.983347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.983359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.983528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.983540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.983746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.983758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.983976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.983989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.984216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.984229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 
00:27:31.114 [2024-11-27 08:10:24.984375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.984387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.984633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.984645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.984871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.984883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.985133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.985146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.985394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.985406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.985630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.985644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.985810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.985822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.986044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.986057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.986194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.986206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.986341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.986353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 
00:27:31.114 [2024-11-27 08:10:24.986512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.986524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.986674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.986686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.986911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.986923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.987078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.114 [2024-11-27 08:10:24.987091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.114 qpair failed and we were unable to recover it. 00:27:31.114 [2024-11-27 08:10:24.987305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.987317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.987487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.987499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.987708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.987720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.987918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.987930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.988092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.988104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.988337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.988349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 
00:27:31.115 [2024-11-27 08:10:24.988492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.988504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.988742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.988753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.988953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.988966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.989139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.989151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.989374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.989385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.989533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.989545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.989679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.989691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.989895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.989907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.990176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.990189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.990361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.990373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 
00:27:31.115 [2024-11-27 08:10:24.990511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.990522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.990668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.990681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.990835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.990848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.991019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.991031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.991283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.991295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.991524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.991536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.991696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.991707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.991940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.991960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.992121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.992133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.992320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.992333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 
00:27:31.115 [2024-11-27 08:10:24.992496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.992508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.992643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.992654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.115 [2024-11-27 08:10:24.992876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.992891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.993092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.993106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:31.115 [2024-11-27 08:10:24.993243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.993257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.993417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.993429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.993519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:31.115 [2024-11-27 08:10:24.993532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.993761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.993774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 
00:27:31.115 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:31.115 [2024-11-27 08:10:24.993940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.993958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.994077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.994091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 08:10:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.115 [2024-11-27 08:10:24.994225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.994238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.994392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.994404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.994654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.115 [2024-11-27 08:10:24.994666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.115 qpair failed and we were unable to recover it. 00:27:31.115 [2024-11-27 08:10:24.994890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.994902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.995126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.995138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.995312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.995324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.995420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.995433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 
00:27:31.116 [2024-11-27 08:10:24.995573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.995586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.995748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.995760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.995936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.995953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.996044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.996056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.996213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.996226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.996313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.996325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.996477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.996490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.996642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.996654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.996793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.996806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.996976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.996989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 
00:27:31.116 [2024-11-27 08:10:24.997190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.997202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.997426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.997439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.997605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.997618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.997845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.997858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.998043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.998057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.998285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.998298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.998381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.998393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.998584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.998595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.998679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.998691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.998899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.998911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 
00:27:31.116 [2024-11-27 08:10:24.999136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.999149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.999301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.999314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.999466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.999479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.999702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.999715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.999890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:24.999902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:24.999993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:25.000006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:25.000165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:25.000180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:25.000285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:25.000298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:25.000442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:25.000454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:25.000600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:25.000612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 
00:27:31.116 [2024-11-27 08:10:25.000780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:25.000793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:25.000931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:25.000944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.116 [2024-11-27 08:10:25.001127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.116 [2024-11-27 08:10:25.001140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.116 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.001235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.001248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.001336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.001349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.001435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.001448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.001526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.001539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.001706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.001718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.001868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.001881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.002015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.002028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 
00:27:31.117 [2024-11-27 08:10:25.002232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.002244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.002400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.002412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.002599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.002612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.002810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.002823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.002974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.002987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.003159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.003172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.003261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.003273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.003357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.003369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.003446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.003458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.003559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.003570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 
00:27:31.117 [2024-11-27 08:10:25.003791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.003804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.003972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.003985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.004185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.004198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.004443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.004455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.004544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.004556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.004754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.004766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.004917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.004929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.005156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.005170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.005311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.005325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.005468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.005482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 
00:27:31.117 [2024-11-27 08:10:25.005725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.005739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.005944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.005961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.006055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.006068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.006198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.006211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.006280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.006292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.006434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.006447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.006532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.006547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.006698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.006711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.006845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.006857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.007059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.007072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 
00:27:31.117 [2024-11-27 08:10:25.007159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.007172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.007322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.007334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.007412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.007424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.007555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.007568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.007707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.007720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.117 [2024-11-27 08:10:25.007811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.117 [2024-11-27 08:10:25.007825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.117 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.008019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.008032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.008113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.008126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.008282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.008294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.008365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.008377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 
00:27:31.118 [2024-11-27 08:10:25.008576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.008588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.008816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.008828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.009007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.009019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.009112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.009124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.009326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.009338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.009481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.009493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.009718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.009730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.009936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.009952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.010178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.010191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.010330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.010342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 
00:27:31.118 [2024-11-27 08:10:25.010489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.010501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.010671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.010683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.010922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.010935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.011136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.011165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.011405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.011422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.011540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.011556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.011667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.011683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.011884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.011900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.012150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.012168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.012381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.012399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 
00:27:31.118 [2024-11-27 08:10:25.012564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.012582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.012678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.012695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.012883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.012900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.013067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.013085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.013188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.013204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.013382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.013397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.013600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.013615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.013750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.013762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.013938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.013955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.014109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.014122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 
00:27:31.118 [2024-11-27 08:10:25.014271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.014284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.014360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.014373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.014456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.014468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.014721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.014733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.014889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.014901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.015075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.015088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.015241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.015253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.015424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.015438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.015596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.015608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.118 qpair failed and we were unable to recover it. 00:27:31.118 [2024-11-27 08:10:25.015751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.118 [2024-11-27 08:10:25.015763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 
00:27:31.119 [2024-11-27 08:10:25.015864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.015876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.015968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.015980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.016074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.016086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.016287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.016299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.016443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.016455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.016551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.016563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.016648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.016660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.016755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.016768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.016936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.016953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.017094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.017106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 
00:27:31.119 [2024-11-27 08:10:25.017238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.017251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.017355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.017367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.017512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.017526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.017623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.017642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.017880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.017897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.018145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.018162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.018325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.018341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.018493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.018510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.018679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.018696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.018839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.018857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 
00:27:31.119 [2024-11-27 08:10:25.019071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.019088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.019301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.019318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.019429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.019445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.019702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.019719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.019929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.019950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.020118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.020136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.020323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.020345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.020486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.020503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.020643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.020659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.020815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.020832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 
00:27:31.119 [2024-11-27 08:10:25.020989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.021007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.021219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.021236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.021407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.021424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.021629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.021645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.021878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.021895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.021996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.022013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.022160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.022176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.022290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.022306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.022537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.022554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.022775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.022791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 
00:27:31.119 [2024-11-27 08:10:25.022913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.022930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.023131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.023149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.023252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.023269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.119 [2024-11-27 08:10:25.023421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.119 [2024-11-27 08:10:25.023438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.119 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.023523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.023540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.023712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.023728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.023832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.023848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.024087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.024106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.024272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.024288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.024402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.024418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 
00:27:31.120 [2024-11-27 08:10:25.024680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.024697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.024796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.024812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.024979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.024996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.025086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.025100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.025211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.025225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.025430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.025442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.025538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.025550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.025721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.025734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.025864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.025877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.026033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.026045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 
00:27:31.120 [2024-11-27 08:10:25.026127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.026139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.026294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.026307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.026391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.026404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.026625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.026639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.026825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.026837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.026994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.027007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.027144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.027159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.027313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.027325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.027458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.027471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 00:27:31.120 [2024-11-27 08:10:25.027749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.120 [2024-11-27 08:10:25.027762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.120 qpair failed and we were unable to recover it. 
00:27:31.120 [2024-11-27 08:10:25.027957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.027969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 [2024-11-27 08:10:25.028068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.028080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 [2024-11-27 08:10:25.028186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.028199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 [2024-11-27 08:10:25.028277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.028289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 [2024-11-27 08:10:25.028490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.028502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 [2024-11-27 08:10:25.028677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.028689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:31.120 [2024-11-27 08:10:25.028900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.028914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 [2024-11-27 08:10:25.029116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.029129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 [2024-11-27 08:10:25.029278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:31.120 [2024-11-27 08:10:25.029291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 [2024-11-27 08:10:25.029451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.029463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.120 [2024-11-27 08:10:25.029622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.029637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 [2024-11-27 08:10:25.029803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.120 [2024-11-27 08:10:25.029816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.120 qpair failed and we were unable to recover it.
00:27:31.120 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.121 [2024-11-27 08:10:25.030029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.121 [2024-11-27 08:10:25.030044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.121 qpair failed and we were unable to recover it.
00:27:31.121 [2024-11-27 08:10:25.030190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.121 [2024-11-27 08:10:25.030203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.121 qpair failed and we were unable to recover it.
00:27:31.121 [2024-11-27 08:10:25.030303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.121 [2024-11-27 08:10:25.030315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.121 qpair failed and we were unable to recover it.
00:27:31.121 [2024-11-27 08:10:25.030404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.121 [2024-11-27 08:10:25.030416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.121 qpair failed and we were unable to recover it.
00:27:31.121 [2024-11-27 08:10:25.030506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.121 [2024-11-27 08:10:25.030518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.121 qpair failed and we were unable to recover it.
00:27:31.121 [2024-11-27 08:10:25.030661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.121 [2024-11-27 08:10:25.030674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.121 qpair failed and we were unable to recover it.
00:27:31.121 [2024-11-27 08:10:25.030873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.121 [2024-11-27 08:10:25.030885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.121 qpair failed and we were unable to recover it.
00:27:31.121 [... the same connect() failed / sock connection error / qpair failed triplet repeats for every reconnect attempt on tqpair=0x7f39c8000b90 through 08:10:25.037 ...]
00:27:31.122 [2024-11-27 08:10:25.038110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.122 [2024-11-27 08:10:25.038140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420
00:27:31.122 qpair failed and we were unable to recover it.
00:27:31.122 [... the triplet then repeats on the next qpair context, tqpair=0x7f39d0000b90, through 08:10:25.058 ...]
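For reference, errno = 111 on Linux is ECONNREFUSED: the host can reach 10.0.0.2, but nothing is accepting TCP connections on port 4420 yet, so each qpair the NVMe/TCP initiator brings up fails immediately and is torn down (the tqpair=0x... value appears to be just the address of the qpair context used for that attempt). A minimal probe, purely illustrative and not part of the test scripts, fails the same way while no listener is up; it assumes bash's /dev/tcp support and coreutils timeout, and the address and port are the ones from the log:

    # Try a plain TCP connect to the address the initiator keeps retrying.
    # With no NVMe-oF listener on 10.0.0.2:4420 this fails, just like the
    # posix_sock_create errors above (ECONNREFUSED when an RST comes back).
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect() to 10.0.0.2:4420 failed (refused or timed out)"
    fi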
00:27:31.124 Malloc0
00:27:31.124 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.124 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:31.124 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.124 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.124 [2024-11-27 08:10:25.059908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.124 [2024-11-27 08:10:25.059940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420
00:27:31.124 qpair failed and we were unable to recover it.
00:27:31.124 [... further identical failures on tqpair=0x7f39d0000b90 through 08:10:25.062 ...]
00:27:31.124 [2024-11-27 08:10:25.062754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:31.124 [2024-11-27 08:10:25.062779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.124 [2024-11-27 08:10:25.062794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39d0000b90 with addr=10.0.0.2, port=4420
00:27:31.124 qpair failed and we were unable to recover it.
00:27:31.124 [... the triplet repeats twice more on tqpair=0x7f39d0000b90 ...]
00:27:31.124 [2024-11-27 08:10:25.063374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.125 [2024-11-27 08:10:25.063413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:27:31.125 qpair failed and we were unable to recover it.
00:27:31.125 [2024-11-27 08:10:25.063714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.125 [2024-11-27 08:10:25.063747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa19be0 with addr=10.0.0.2, port=4420
00:27:31.125 qpair failed and we were unable to recover it.
00:27:31.125 [2024-11-27 08:10:25.064002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.125 [2024-11-27 08:10:25.064032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.125 qpair failed and we were unable to recover it.
00:27:31.125 [... the triplet repeats three more times on tqpair=0x7f39c8000b90 ...]
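The *** TCP Transport Init *** notice is the target acknowledging the rpc_cmd nvmf_create_transport -t tcp -o call traced just above; the host-side connect() failures continue because no subsystem listener exists on 10.0.0.2:4420 yet. On a live target, one way to confirm the transport came up, sketched here under the assumption of an SPDK checkout and the default RPC socket, is to query it directly:

    # List the transports the running nvmf_tgt reports; a "tcp" entry should be
    # present once the *** TCP Transport Init *** notice has been logged.
    ./scripts/rpc.py nvmf_get_transports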
00:27:31.125 [2024-11-27 08:10:25.064927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.125 [2024-11-27 08:10:25.064939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.125 qpair failed and we were unable to recover it.
00:27:31.125 [... the same triplet repeats for every further attempt on tqpair=0x7f39c8000b90 through 08:10:25.070 ...]
00:27:31.125 [2024-11-27 08:10:25.070513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.125 [2024-11-27 08:10:25.070525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.125 qpair failed and we were unable to recover it.
00:27:31.125 [2024-11-27 08:10:25.070727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.125 [2024-11-27 08:10:25.070738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.125 qpair failed and we were unable to recover it.
00:27:31.126 [2024-11-27 08:10:25.070872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.126 [2024-11-27 08:10:25.070883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.126 qpair failed and we were unable to recover it.
00:27:31.126 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.126 [2024-11-27 08:10:25.071103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.126 [2024-11-27 08:10:25.071115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.126 qpair failed and we were unable to recover it.
00:27:31.126 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:31.126 [2024-11-27 08:10:25.071262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.126 [2024-11-27 08:10:25.071275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.126 qpair failed and we were unable to recover it.
00:27:31.126 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.126 [2024-11-27 08:10:25.071438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.126 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.126 [2024-11-27 08:10:25.071451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.126 qpair failed and we were unable to recover it.
00:27:31.126 [2024-11-27 08:10:25.071533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.126 [2024-11-27 08:10:25.071545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.126 qpair failed and we were unable to recover it.
00:27:31.126 [2024-11-27 08:10:25.071621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.126 [2024-11-27 08:10:25.071632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.126 qpair failed and we were unable to recover it.
00:27:31.126 [2024-11-27 08:10:25.071856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.071868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.071966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.071979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.072201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.072213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.072370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.072382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.072589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.072601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.072847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.072859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.072926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.072937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.073112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.073124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.073328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.073339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.073477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.073489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 
00:27:31.126 [2024-11-27 08:10:25.073734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.073746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.073904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.073915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.074117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.074129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.074355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.074367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.074542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.074554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.074753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.074765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.074983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.074996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.075218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.075230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.075456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.075468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.075614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.075625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 
00:27:31.126 [2024-11-27 08:10:25.075712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.075724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.075805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.075817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.076046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.076059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.076308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.076321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.076550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.076563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.076702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.076714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.076859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.076870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.077122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.077134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.077278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.077290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 00:27:31.126 [2024-11-27 08:10:25.077521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.126 [2024-11-27 08:10:25.077533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.126 qpair failed and we were unable to recover it. 
00:27:31.126 [2024-11-27 08:10:25.077683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.077695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.077838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.077850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.077950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.077962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.078186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.078199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.078279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.078293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.078454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.078465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.078682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.078694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.078929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.078941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.127 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:31.127 [2024-11-27 08:10:25.079168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.079181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.127 [2024-11-27 08:10:25.079337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.079350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.127 [2024-11-27 08:10:25.079504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.079516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.079711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.079723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.079883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.079895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.080096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.080109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.080334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.080346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.080567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.080579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.080781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.080793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.080926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.127 [2024-11-27 08:10:25.080938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.127 qpair failed and we were unable to recover it.
00:27:31.127 [2024-11-27 08:10:25.081041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.081053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.081277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.081289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.081381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.081393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.081557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.081570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.081728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.081740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.081968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.081980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.082061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.082074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.082275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.082286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.082534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.082545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.082707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.082719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 
00:27:31.127 [2024-11-27 08:10:25.082953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.082966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.083197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.083209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.083379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.083391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.083534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.083546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.083679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.083692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.083883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.083895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.084038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.084051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.084193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.127 [2024-11-27 08:10:25.084205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.127 qpair failed and we were unable to recover it. 00:27:31.127 [2024-11-27 08:10:25.084357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.084370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.084509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.084521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 
00:27:31.128 [2024-11-27 08:10:25.084719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.084731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.084944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.084960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.085046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.085058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.085192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.085204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.085347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.085359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.085505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.085517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.085749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.085761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.085965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.085977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.086157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.086169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.086313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.086326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 
00:27:31.128 [2024-11-27 08:10:25.086405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.086417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.128 qpair failed and we were unable to recover it.
00:27:31.128 [2024-11-27 08:10:25.086571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.086582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.128 qpair failed and we were unable to recover it.
00:27:31.128 [2024-11-27 08:10:25.086734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.086746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.128 qpair failed and we were unable to recover it.
00:27:31.128 [2024-11-27 08:10:25.086900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.086912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.128 qpair failed and we were unable to recover it.
00:27:31.128 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.128 [2024-11-27 08:10:25.087065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.087079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.128 qpair failed and we were unable to recover it.
00:27:31.128 [2024-11-27 08:10:25.087146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.087159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.128 qpair failed and we were unable to recover it.
00:27:31.128 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:31.128 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.128 [2024-11-27 08:10:25.087390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.087405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.128 qpair failed and we were unable to recover it.
00:27:31.128 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.128 [2024-11-27 08:10:25.087611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.087624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.128 qpair failed and we were unable to recover it.
00:27:31.128 [2024-11-27 08:10:25.087785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.087797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.087884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.087897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.088052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.088065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.088265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.088277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.088536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.088548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.088761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.088773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.089024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.089036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.089188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.089200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.089410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.089422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 00:27:31.128 [2024-11-27 08:10:25.089642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.128 [2024-11-27 08:10:25.089654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420 00:27:31.128 qpair failed and we were unable to recover it. 
00:27:31.128 [2024-11-27 08:10:25.089888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.089900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.128 qpair failed and we were unable to recover it.
00:27:31.128 [2024-11-27 08:10:25.090001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.090016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.128 qpair failed and we were unable to recover it.
00:27:31.128 [2024-11-27 08:10:25.090261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.128 [2024-11-27 08:10:25.090274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.129 qpair failed and we were unable to recover it.
00:27:31.129 [2024-11-27 08:10:25.090429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.129 [2024-11-27 08:10:25.090441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.129 qpair failed and we were unable to recover it.
00:27:31.129 [2024-11-27 08:10:25.090618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.129 [2024-11-27 08:10:25.090630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.129 qpair failed and we were unable to recover it.
00:27:31.129 [2024-11-27 08:10:25.090838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.129 [2024-11-27 08:10:25.090850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c8000b90 with addr=10.0.0.2, port=4420
00:27:31.129 qpair failed and we were unable to recover it.
00:27:31.129 [2024-11-27 08:10:25.091018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:31.129 [2024-11-27 08:10:25.093429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.129 [2024-11-27 08:10:25.093506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.129 [2024-11-27 08:10:25.093524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.129 [2024-11-27 08:10:25.093533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.129 [2024-11-27 08:10:25.093541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90
00:27:31.129 [2024-11-27 08:10:25.093564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:31.129 qpair failed and we were unable to recover it.
00:27:31.129 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.129 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:31.129 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.129 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.129 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.129 08:10:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2607622
00:27:31.129 [2024-11-27 08:10:25.103356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.129 [2024-11-27 08:10:25.103421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.129 [2024-11-27 08:10:25.103437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.129 [2024-11-27 08:10:25.103445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.129 [2024-11-27 08:10:25.103452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90
00:27:31.129 [2024-11-27 08:10:25.103473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:31.129 qpair failed and we were unable to recover it.
00:27:31.129 [2024-11-27 08:10:25.113362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.129 [2024-11-27 08:10:25.113453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.129 [2024-11-27 08:10:25.113468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.129 [2024-11-27 08:10:25.113475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.129 [2024-11-27 08:10:25.113481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90
00:27:31.129 [2024-11-27 08:10:25.113497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:31.129 qpair failed and we were unable to recover it.
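The shell trace interleaved through the errors above shows the target-side setup that the test drives through its rpc_cmd helper: create subsystem nqn.2016-06.io.spdk:cnode1, attach the Malloc0 namespace, and add TCP listeners for the subsystem and for discovery on 10.0.0.2:4420. A minimal standalone sketch of the same sequence against an already-running nvmf_tgt, issued directly with SPDK's scripts/rpc.py (the bdev and transport creation steps are assumptions here; they happen before this excerpt):
  # assumes ./build/bin/nvmf_tgt is running and reachable on the default RPC socket
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512        # assumed earlier step, not shown in this excerpt
  ./scripts/rpc.py nvmf_create_transport -t tcp                # assumed earlier step, not shown in this excerpt
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420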
00:27:31.390 [2024-11-27 08:10:25.123355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.390 [2024-11-27 08:10:25.123426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.390 [2024-11-27 08:10:25.123441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.390 [2024-11-27 08:10:25.123449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.390 [2024-11-27 08:10:25.123455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.390 [2024-11-27 08:10:25.123471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.390 qpair failed and we were unable to recover it. 00:27:31.390 [2024-11-27 08:10:25.133331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.390 [2024-11-27 08:10:25.133391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.390 [2024-11-27 08:10:25.133406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.390 [2024-11-27 08:10:25.133413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.390 [2024-11-27 08:10:25.133419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.390 [2024-11-27 08:10:25.133435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.390 qpair failed and we were unable to recover it. 00:27:31.390 [2024-11-27 08:10:25.143383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.390 [2024-11-27 08:10:25.143453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.390 [2024-11-27 08:10:25.143468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.390 [2024-11-27 08:10:25.143474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.390 [2024-11-27 08:10:25.143481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.390 [2024-11-27 08:10:25.143497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.390 qpair failed and we were unable to recover it. 
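Once the listener is up, the failure mode changes: in each of the attempts logged above and below, the target rejects the new I/O queue pair (qpair id 2) because controller ID 0x1 is no longer known to it, and the host sees the Fabrics CONNECT complete with sct 1, sc 130. sct 1 is the command-specific status type, and 130 decimal is 0x82, which in the NVMe over Fabrics Connect status encoding corresponds to an invalid-parameters rejection; the host then reports the CQ transport error -6 and gives up on the qpair. The decimal-to-hex step, for reference:
  printf 'sc 130 = 0x%02x\n' 130   # -> sc 130 = 0x82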
00:27:31.390 [2024-11-27 08:10:25.153365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.390 [2024-11-27 08:10:25.153427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.390 [2024-11-27 08:10:25.153442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.390 [2024-11-27 08:10:25.153450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.390 [2024-11-27 08:10:25.153457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.390 [2024-11-27 08:10:25.153471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.390 qpair failed and we were unable to recover it. 00:27:31.390 [2024-11-27 08:10:25.163417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.390 [2024-11-27 08:10:25.163493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.390 [2024-11-27 08:10:25.163508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.390 [2024-11-27 08:10:25.163515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.390 [2024-11-27 08:10:25.163522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.390 [2024-11-27 08:10:25.163537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.390 qpair failed and we were unable to recover it. 00:27:31.390 [2024-11-27 08:10:25.173469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.390 [2024-11-27 08:10:25.173526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.390 [2024-11-27 08:10:25.173540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.390 [2024-11-27 08:10:25.173546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.390 [2024-11-27 08:10:25.173554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.390 [2024-11-27 08:10:25.173569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.390 qpair failed and we were unable to recover it. 
00:27:31.390 [2024-11-27 08:10:25.183478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.390 [2024-11-27 08:10:25.183540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.390 [2024-11-27 08:10:25.183554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.390 [2024-11-27 08:10:25.183562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.390 [2024-11-27 08:10:25.183568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.391 [2024-11-27 08:10:25.183584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.391 qpair failed and we were unable to recover it. 00:27:31.391 [2024-11-27 08:10:25.193501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.391 [2024-11-27 08:10:25.193561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.391 [2024-11-27 08:10:25.193576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.391 [2024-11-27 08:10:25.193587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.391 [2024-11-27 08:10:25.193594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.391 [2024-11-27 08:10:25.193610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.391 qpair failed and we were unable to recover it. 00:27:31.391 [2024-11-27 08:10:25.203538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.391 [2024-11-27 08:10:25.203597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.391 [2024-11-27 08:10:25.203611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.391 [2024-11-27 08:10:25.203617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.391 [2024-11-27 08:10:25.203625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.391 [2024-11-27 08:10:25.203640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.391 qpair failed and we were unable to recover it. 
00:27:31.391 [2024-11-27 08:10:25.213560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.391 [2024-11-27 08:10:25.213619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.391 [2024-11-27 08:10:25.213633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.391 [2024-11-27 08:10:25.213640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.391 [2024-11-27 08:10:25.213647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.391 [2024-11-27 08:10:25.213662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.391 qpair failed and we were unable to recover it. 00:27:31.391 [2024-11-27 08:10:25.223568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.391 [2024-11-27 08:10:25.223623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.391 [2024-11-27 08:10:25.223637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.391 [2024-11-27 08:10:25.223645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.391 [2024-11-27 08:10:25.223652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.391 [2024-11-27 08:10:25.223667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.391 qpair failed and we were unable to recover it. 00:27:31.391 [2024-11-27 08:10:25.233601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.391 [2024-11-27 08:10:25.233677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.391 [2024-11-27 08:10:25.233691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.391 [2024-11-27 08:10:25.233698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.391 [2024-11-27 08:10:25.233705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.391 [2024-11-27 08:10:25.233723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.391 qpair failed and we were unable to recover it. 
00:27:31.391 [2024-11-27 08:10:25.243672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.391 [2024-11-27 08:10:25.243752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.391 [2024-11-27 08:10:25.243765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.391 [2024-11-27 08:10:25.243772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.391 [2024-11-27 08:10:25.243778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.391 [2024-11-27 08:10:25.243793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.391 qpair failed and we were unable to recover it. 00:27:31.391 [2024-11-27 08:10:25.253680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.391 [2024-11-27 08:10:25.253766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.391 [2024-11-27 08:10:25.253780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.391 [2024-11-27 08:10:25.253787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.391 [2024-11-27 08:10:25.253794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.391 [2024-11-27 08:10:25.253809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.391 qpair failed and we were unable to recover it. 00:27:31.391 [2024-11-27 08:10:25.263684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.391 [2024-11-27 08:10:25.263742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.391 [2024-11-27 08:10:25.263756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.391 [2024-11-27 08:10:25.263764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.391 [2024-11-27 08:10:25.263771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.391 [2024-11-27 08:10:25.263786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.391 qpair failed and we were unable to recover it. 
00:27:31.391 [2024-11-27 08:10:25.273732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.391 [2024-11-27 08:10:25.273791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.391 [2024-11-27 08:10:25.273805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.391 [2024-11-27 08:10:25.273812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.391 [2024-11-27 08:10:25.273818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.391 [2024-11-27 08:10:25.273834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.391 qpair failed and we were unable to recover it. 00:27:31.391 [2024-11-27 08:10:25.283781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.391 [2024-11-27 08:10:25.283855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.391 [2024-11-27 08:10:25.283870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.391 [2024-11-27 08:10:25.283877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.391 [2024-11-27 08:10:25.283884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.283899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 00:27:31.392 [2024-11-27 08:10:25.293781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.293844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.392 [2024-11-27 08:10:25.293859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.392 [2024-11-27 08:10:25.293866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.392 [2024-11-27 08:10:25.293873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.293888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 
00:27:31.392 [2024-11-27 08:10:25.303803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.303861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.392 [2024-11-27 08:10:25.303875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.392 [2024-11-27 08:10:25.303882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.392 [2024-11-27 08:10:25.303889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.303904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 00:27:31.392 [2024-11-27 08:10:25.313829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.313889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.392 [2024-11-27 08:10:25.313903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.392 [2024-11-27 08:10:25.313910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.392 [2024-11-27 08:10:25.313917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.313933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 00:27:31.392 [2024-11-27 08:10:25.323851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.323916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.392 [2024-11-27 08:10:25.323933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.392 [2024-11-27 08:10:25.323940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.392 [2024-11-27 08:10:25.323949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.323966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 
00:27:31.392 [2024-11-27 08:10:25.333937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.334000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.392 [2024-11-27 08:10:25.334014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.392 [2024-11-27 08:10:25.334021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.392 [2024-11-27 08:10:25.334028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.334044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 00:27:31.392 [2024-11-27 08:10:25.343964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.344022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.392 [2024-11-27 08:10:25.344036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.392 [2024-11-27 08:10:25.344043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.392 [2024-11-27 08:10:25.344050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.344065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 00:27:31.392 [2024-11-27 08:10:25.353952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.354010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.392 [2024-11-27 08:10:25.354024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.392 [2024-11-27 08:10:25.354031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.392 [2024-11-27 08:10:25.354037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.354053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 
00:27:31.392 [2024-11-27 08:10:25.363999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.364068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.392 [2024-11-27 08:10:25.364083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.392 [2024-11-27 08:10:25.364090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.392 [2024-11-27 08:10:25.364100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.364115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 00:27:31.392 [2024-11-27 08:10:25.374033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.374091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.392 [2024-11-27 08:10:25.374105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.392 [2024-11-27 08:10:25.374112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.392 [2024-11-27 08:10:25.374119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.374134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 00:27:31.392 [2024-11-27 08:10:25.384056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.384165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.392 [2024-11-27 08:10:25.384180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.392 [2024-11-27 08:10:25.384187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.392 [2024-11-27 08:10:25.384193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.392 [2024-11-27 08:10:25.384210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.392 qpair failed and we were unable to recover it. 
00:27:31.392 [2024-11-27 08:10:25.394091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.392 [2024-11-27 08:10:25.394146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.394160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.394167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.393 [2024-11-27 08:10:25.394173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.393 [2024-11-27 08:10:25.394189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.393 qpair failed and we were unable to recover it. 00:27:31.393 [2024-11-27 08:10:25.404061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.393 [2024-11-27 08:10:25.404123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.404137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.404144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.393 [2024-11-27 08:10:25.404151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.393 [2024-11-27 08:10:25.404167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.393 qpair failed and we were unable to recover it. 00:27:31.393 [2024-11-27 08:10:25.414145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.393 [2024-11-27 08:10:25.414207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.414221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.414228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.393 [2024-11-27 08:10:25.414234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.393 [2024-11-27 08:10:25.414248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.393 qpair failed and we were unable to recover it. 
00:27:31.393 [2024-11-27 08:10:25.424200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.393 [2024-11-27 08:10:25.424271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.424284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.424291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.393 [2024-11-27 08:10:25.424298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.393 [2024-11-27 08:10:25.424313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.393 qpair failed and we were unable to recover it. 00:27:31.393 [2024-11-27 08:10:25.434201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.393 [2024-11-27 08:10:25.434298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.434311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.434318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.393 [2024-11-27 08:10:25.434325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.393 [2024-11-27 08:10:25.434340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.393 qpair failed and we were unable to recover it. 00:27:31.393 [2024-11-27 08:10:25.444186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.393 [2024-11-27 08:10:25.444268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.444282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.444289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.393 [2024-11-27 08:10:25.444295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.393 [2024-11-27 08:10:25.444311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.393 qpair failed and we were unable to recover it. 
00:27:31.393 [2024-11-27 08:10:25.454253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.393 [2024-11-27 08:10:25.454312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.454329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.454337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.393 [2024-11-27 08:10:25.454343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.393 [2024-11-27 08:10:25.454359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.393 qpair failed and we were unable to recover it. 00:27:31.393 [2024-11-27 08:10:25.464245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.393 [2024-11-27 08:10:25.464303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.464317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.464324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.393 [2024-11-27 08:10:25.464330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.393 [2024-11-27 08:10:25.464346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.393 qpair failed and we were unable to recover it. 00:27:31.393 [2024-11-27 08:10:25.474325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.393 [2024-11-27 08:10:25.474409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.474422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.474429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.393 [2024-11-27 08:10:25.474436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.393 [2024-11-27 08:10:25.474452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.393 qpair failed and we were unable to recover it. 
00:27:31.393 [2024-11-27 08:10:25.484341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.393 [2024-11-27 08:10:25.484404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.484418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.484425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.393 [2024-11-27 08:10:25.484432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.393 [2024-11-27 08:10:25.484447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.393 qpair failed and we were unable to recover it. 00:27:31.393 [2024-11-27 08:10:25.494381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.393 [2024-11-27 08:10:25.494467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.393 [2024-11-27 08:10:25.494481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.393 [2024-11-27 08:10:25.494488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.394 [2024-11-27 08:10:25.494498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.394 [2024-11-27 08:10:25.494513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.394 qpair failed and we were unable to recover it. 00:27:31.654 [2024-11-27 08:10:25.504425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.654 [2024-11-27 08:10:25.504477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.654 [2024-11-27 08:10:25.504491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.654 [2024-11-27 08:10:25.504498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.654 [2024-11-27 08:10:25.504504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.654 [2024-11-27 08:10:25.504521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.654 qpair failed and we were unable to recover it. 
00:27:31.654 [2024-11-27 08:10:25.514373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.654 [2024-11-27 08:10:25.514431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.654 [2024-11-27 08:10:25.514446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.654 [2024-11-27 08:10:25.514453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.654 [2024-11-27 08:10:25.514461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.654 [2024-11-27 08:10:25.514475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.654 qpair failed and we were unable to recover it. 00:27:31.654 [2024-11-27 08:10:25.524371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.654 [2024-11-27 08:10:25.524431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.654 [2024-11-27 08:10:25.524444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.654 [2024-11-27 08:10:25.524452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.654 [2024-11-27 08:10:25.524459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.654 [2024-11-27 08:10:25.524474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.654 qpair failed and we were unable to recover it. 00:27:31.654 [2024-11-27 08:10:25.534433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.654 [2024-11-27 08:10:25.534493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.654 [2024-11-27 08:10:25.534507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.654 [2024-11-27 08:10:25.534515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.654 [2024-11-27 08:10:25.534521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.654 [2024-11-27 08:10:25.534536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.654 qpair failed and we were unable to recover it. 
00:27:31.654 [2024-11-27 08:10:25.544435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.654 [2024-11-27 08:10:25.544498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.654 [2024-11-27 08:10:25.544512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.654 [2024-11-27 08:10:25.544520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.654 [2024-11-27 08:10:25.544527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.654 [2024-11-27 08:10:25.544542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.654 qpair failed and we were unable to recover it. 00:27:31.654 [2024-11-27 08:10:25.554550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.654 [2024-11-27 08:10:25.554606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.654 [2024-11-27 08:10:25.554619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.654 [2024-11-27 08:10:25.554627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.654 [2024-11-27 08:10:25.554634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.654 [2024-11-27 08:10:25.554649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.654 qpair failed and we were unable to recover it. 00:27:31.654 [2024-11-27 08:10:25.564501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.654 [2024-11-27 08:10:25.564563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.654 [2024-11-27 08:10:25.564577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.654 [2024-11-27 08:10:25.564584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.654 [2024-11-27 08:10:25.564590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.654 [2024-11-27 08:10:25.564605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.654 qpair failed and we were unable to recover it. 
00:27:31.654 [2024-11-27 08:10:25.574531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.654 [2024-11-27 08:10:25.574590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.654 [2024-11-27 08:10:25.574603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.654 [2024-11-27 08:10:25.574611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.654 [2024-11-27 08:10:25.574618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.574633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 00:27:31.655 [2024-11-27 08:10:25.584628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.584709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.584724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.584731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.584737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.584753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 00:27:31.655 [2024-11-27 08:10:25.594634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.594744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.594759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.594767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.594773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.594789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 
00:27:31.655 [2024-11-27 08:10:25.604721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.604779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.604795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.604802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.604809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.604825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 00:27:31.655 [2024-11-27 08:10:25.614706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.614779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.614794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.614801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.614807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.614822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 00:27:31.655 [2024-11-27 08:10:25.624740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.624803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.624818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.624829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.624835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.624850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 
00:27:31.655 [2024-11-27 08:10:25.634701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.634755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.634770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.634777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.634783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.634799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 00:27:31.655 [2024-11-27 08:10:25.644743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.644831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.644845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.644852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.644859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.644874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 00:27:31.655 [2024-11-27 08:10:25.654893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.654962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.654977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.654985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.654991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.655007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 
00:27:31.655 [2024-11-27 08:10:25.664790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.664855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.664869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.664877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.664884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.664902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 00:27:31.655 [2024-11-27 08:10:25.674833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.674928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.674943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.674954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.674961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.674976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 00:27:31.655 [2024-11-27 08:10:25.684896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.684958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.684973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.684980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.684986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.685002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 
00:27:31.655 [2024-11-27 08:10:25.694883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.694942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.694960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.694967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.694974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.655 [2024-11-27 08:10:25.694989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.655 qpair failed and we were unable to recover it. 00:27:31.655 [2024-11-27 08:10:25.704963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.655 [2024-11-27 08:10:25.705017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.655 [2024-11-27 08:10:25.705031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.655 [2024-11-27 08:10:25.705038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.655 [2024-11-27 08:10:25.705045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.656 [2024-11-27 08:10:25.705060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.656 qpair failed and we were unable to recover it. 00:27:31.656 [2024-11-27 08:10:25.715027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.656 [2024-11-27 08:10:25.715121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.656 [2024-11-27 08:10:25.715137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.656 [2024-11-27 08:10:25.715144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.656 [2024-11-27 08:10:25.715152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.656 [2024-11-27 08:10:25.715167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.656 qpair failed and we were unable to recover it. 
00:27:31.656 [2024-11-27 08:10:25.724961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.656 [2024-11-27 08:10:25.725023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.656 [2024-11-27 08:10:25.725037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.656 [2024-11-27 08:10:25.725044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.656 [2024-11-27 08:10:25.725052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.656 [2024-11-27 08:10:25.725067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.656 qpair failed and we were unable to recover it. 00:27:31.656 [2024-11-27 08:10:25.735093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.656 [2024-11-27 08:10:25.735148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.656 [2024-11-27 08:10:25.735163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.656 [2024-11-27 08:10:25.735170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.656 [2024-11-27 08:10:25.735178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.656 [2024-11-27 08:10:25.735194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.656 qpair failed and we were unable to recover it. 00:27:31.656 [2024-11-27 08:10:25.745057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.656 [2024-11-27 08:10:25.745112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.656 [2024-11-27 08:10:25.745127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.656 [2024-11-27 08:10:25.745134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.656 [2024-11-27 08:10:25.745140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.656 [2024-11-27 08:10:25.745155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.656 qpair failed and we were unable to recover it. 
00:27:31.656 [2024-11-27 08:10:25.755096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.656 [2024-11-27 08:10:25.755171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.656 [2024-11-27 08:10:25.755189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.656 [2024-11-27 08:10:25.755196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.656 [2024-11-27 08:10:25.755203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.656 [2024-11-27 08:10:25.755218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.656 qpair failed and we were unable to recover it. 00:27:31.916 [2024-11-27 08:10:25.765195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.916 [2024-11-27 08:10:25.765255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.916 [2024-11-27 08:10:25.765269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.916 [2024-11-27 08:10:25.765276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.916 [2024-11-27 08:10:25.765283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.916 [2024-11-27 08:10:25.765299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.916 qpair failed and we were unable to recover it. 00:27:31.916 [2024-11-27 08:10:25.775162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.916 [2024-11-27 08:10:25.775223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.916 [2024-11-27 08:10:25.775239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.916 [2024-11-27 08:10:25.775247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.916 [2024-11-27 08:10:25.775254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.916 [2024-11-27 08:10:25.775269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.916 qpair failed and we were unable to recover it. 
00:27:31.916 [2024-11-27 08:10:25.785234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.916 [2024-11-27 08:10:25.785312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.916 [2024-11-27 08:10:25.785326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.916 [2024-11-27 08:10:25.785333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.916 [2024-11-27 08:10:25.785339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.916 [2024-11-27 08:10:25.785354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.916 qpair failed and we were unable to recover it. 00:27:31.916 [2024-11-27 08:10:25.795224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.916 [2024-11-27 08:10:25.795283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.916 [2024-11-27 08:10:25.795297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.916 [2024-11-27 08:10:25.795305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.916 [2024-11-27 08:10:25.795312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.916 [2024-11-27 08:10:25.795330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.916 qpair failed and we were unable to recover it. 00:27:31.916 [2024-11-27 08:10:25.805287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.916 [2024-11-27 08:10:25.805363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.916 [2024-11-27 08:10:25.805378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.916 [2024-11-27 08:10:25.805385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.916 [2024-11-27 08:10:25.805391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.916 [2024-11-27 08:10:25.805407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.916 qpair failed and we were unable to recover it. 
00:27:31.916 [2024-11-27 08:10:25.815328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.916 [2024-11-27 08:10:25.815386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.916 [2024-11-27 08:10:25.815400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.916 [2024-11-27 08:10:25.815408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.916 [2024-11-27 08:10:25.815415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.916 [2024-11-27 08:10:25.815429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.916 qpair failed and we were unable to recover it. 00:27:31.916 [2024-11-27 08:10:25.825315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.916 [2024-11-27 08:10:25.825373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.916 [2024-11-27 08:10:25.825387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.916 [2024-11-27 08:10:25.825394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.916 [2024-11-27 08:10:25.825400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.916 [2024-11-27 08:10:25.825416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.916 qpair failed and we were unable to recover it. 00:27:31.916 [2024-11-27 08:10:25.835331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.916 [2024-11-27 08:10:25.835395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.835409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.835417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.835424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.835439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 
00:27:31.917 [2024-11-27 08:10:25.845363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.845432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.845446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.845454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.845460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.845475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 00:27:31.917 [2024-11-27 08:10:25.855393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.855451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.855464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.855471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.855479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.855494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 00:27:31.917 [2024-11-27 08:10:25.865425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.865482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.865496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.865504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.865511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.865526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 
00:27:31.917 [2024-11-27 08:10:25.875452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.875520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.875534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.875542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.875548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.875563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 00:27:31.917 [2024-11-27 08:10:25.885520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.885576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.885592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.885600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.885607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.885622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 00:27:31.917 [2024-11-27 08:10:25.895659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.895721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.895736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.895743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.895749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.895763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 
00:27:31.917 [2024-11-27 08:10:25.905588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.905646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.905663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.905670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.905677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.905693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 00:27:31.917 [2024-11-27 08:10:25.915534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.915590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.915605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.915612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.915619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.915636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 00:27:31.917 [2024-11-27 08:10:25.925642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.925704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.925718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.925725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.925735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.925750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 
00:27:31.917 [2024-11-27 08:10:25.935642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.935704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.935721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.935729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.935736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.935752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 00:27:31.917 [2024-11-27 08:10:25.945659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.945717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.945732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.945739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.945746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.945761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 00:27:31.917 [2024-11-27 08:10:25.955691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.955748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.917 [2024-11-27 08:10:25.955762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.917 [2024-11-27 08:10:25.955770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.917 [2024-11-27 08:10:25.955777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.917 [2024-11-27 08:10:25.955792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.917 qpair failed and we were unable to recover it. 
00:27:31.917 [2024-11-27 08:10:25.965731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.917 [2024-11-27 08:10:25.965794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.918 [2024-11-27 08:10:25.965808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.918 [2024-11-27 08:10:25.965815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.918 [2024-11-27 08:10:25.965821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.918 [2024-11-27 08:10:25.965836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.918 qpair failed and we were unable to recover it. 00:27:31.918 [2024-11-27 08:10:25.975740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.918 [2024-11-27 08:10:25.975821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.918 [2024-11-27 08:10:25.975835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.918 [2024-11-27 08:10:25.975843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.918 [2024-11-27 08:10:25.975849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.918 [2024-11-27 08:10:25.975864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.918 qpair failed and we were unable to recover it. 00:27:31.918 [2024-11-27 08:10:25.985778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.918 [2024-11-27 08:10:25.985829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.918 [2024-11-27 08:10:25.985844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.918 [2024-11-27 08:10:25.985850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.918 [2024-11-27 08:10:25.985857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.918 [2024-11-27 08:10:25.985872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.918 qpair failed and we were unable to recover it. 
00:27:31.918 [2024-11-27 08:10:25.995775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.918 [2024-11-27 08:10:25.995881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.918 [2024-11-27 08:10:25.995895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.918 [2024-11-27 08:10:25.995903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.918 [2024-11-27 08:10:25.995909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.918 [2024-11-27 08:10:25.995926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.918 qpair failed and we were unable to recover it. 00:27:31.918 [2024-11-27 08:10:26.005846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.918 [2024-11-27 08:10:26.005906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.918 [2024-11-27 08:10:26.005921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.918 [2024-11-27 08:10:26.005928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.918 [2024-11-27 08:10:26.005936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.918 [2024-11-27 08:10:26.005957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.918 qpair failed and we were unable to recover it. 00:27:31.918 [2024-11-27 08:10:26.015919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.918 [2024-11-27 08:10:26.015976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.918 [2024-11-27 08:10:26.015997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.918 [2024-11-27 08:10:26.016005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.918 [2024-11-27 08:10:26.016012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:31.918 [2024-11-27 08:10:26.016027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.918 qpair failed and we were unable to recover it. 
00:27:32.176 [2024-11-27 08:10:26.025906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.176 [2024-11-27 08:10:26.025965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.176 [2024-11-27 08:10:26.025979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.176 [2024-11-27 08:10:26.025987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.176 [2024-11-27 08:10:26.025993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.176 [2024-11-27 08:10:26.026008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.176 qpair failed and we were unable to recover it. 00:27:32.176 [2024-11-27 08:10:26.035979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.176 [2024-11-27 08:10:26.036039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.176 [2024-11-27 08:10:26.036075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.176 [2024-11-27 08:10:26.036083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.176 [2024-11-27 08:10:26.036090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.176 [2024-11-27 08:10:26.036114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.176 qpair failed and we were unable to recover it. 00:27:32.176 [2024-11-27 08:10:26.045979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.176 [2024-11-27 08:10:26.046039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.046054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.046062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.046068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.046085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 
00:27:32.177 [2024-11-27 08:10:26.055969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.056030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.056046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.056057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.056065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.056081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 00:27:32.177 [2024-11-27 08:10:26.066022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.066080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.066094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.066102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.066109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.066124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 00:27:32.177 [2024-11-27 08:10:26.075981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.076049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.076064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.076071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.076078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.076092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 
00:27:32.177 [2024-11-27 08:10:26.086096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.086153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.086167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.086174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.086180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.086196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 00:27:32.177 [2024-11-27 08:10:26.096065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.096118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.096132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.096138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.096145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.096161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 00:27:32.177 [2024-11-27 08:10:26.106172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.106231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.106246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.106254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.106261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.106275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 
00:27:32.177 [2024-11-27 08:10:26.116187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.116265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.116279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.116286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.116293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.116308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 00:27:32.177 [2024-11-27 08:10:26.126191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.126268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.126282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.126289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.126295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.126311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 00:27:32.177 [2024-11-27 08:10:26.136245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.136351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.136365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.136373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.136380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.136395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 
00:27:32.177 [2024-11-27 08:10:26.146245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.146301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.146315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.146322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.146329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.146344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 00:27:32.177 [2024-11-27 08:10:26.156274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.156333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.156348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.156355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.156362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.156377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 00:27:32.177 [2024-11-27 08:10:26.166352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.166429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.177 [2024-11-27 08:10:26.166443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.177 [2024-11-27 08:10:26.166450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.177 [2024-11-27 08:10:26.166457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.177 [2024-11-27 08:10:26.166472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.177 qpair failed and we were unable to recover it. 
00:27:32.177 [2024-11-27 08:10:26.176337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.177 [2024-11-27 08:10:26.176397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.176411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.176418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.176424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.176440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 00:27:32.178 [2024-11-27 08:10:26.186359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.178 [2024-11-27 08:10:26.186424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.186438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.186449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.186455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.186469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 00:27:32.178 [2024-11-27 08:10:26.196381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.178 [2024-11-27 08:10:26.196440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.196454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.196461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.196468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.196483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 
00:27:32.178 [2024-11-27 08:10:26.206412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.178 [2024-11-27 08:10:26.206470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.206483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.206490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.206497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.206512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 00:27:32.178 [2024-11-27 08:10:26.216378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.178 [2024-11-27 08:10:26.216454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.216469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.216476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.216482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.216497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 00:27:32.178 [2024-11-27 08:10:26.226427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.178 [2024-11-27 08:10:26.226494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.226508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.226516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.226522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.226540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 
00:27:32.178 [2024-11-27 08:10:26.236548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.178 [2024-11-27 08:10:26.236649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.236663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.236670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.236678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.236694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 00:27:32.178 [2024-11-27 08:10:26.246569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.178 [2024-11-27 08:10:26.246638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.246652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.246660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.246667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.246682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 00:27:32.178 [2024-11-27 08:10:26.256568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.178 [2024-11-27 08:10:26.256638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.256653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.256660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.256666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.256681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 
00:27:32.178 [2024-11-27 08:10:26.266663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.178 [2024-11-27 08:10:26.266720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.266734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.266741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.266747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.266762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 00:27:32.178 [2024-11-27 08:10:26.276677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.178 [2024-11-27 08:10:26.276778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.178 [2024-11-27 08:10:26.276792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.178 [2024-11-27 08:10:26.276799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.178 [2024-11-27 08:10:26.276806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.178 [2024-11-27 08:10:26.276821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.178 qpair failed and we were unable to recover it. 00:27:32.437 [2024-11-27 08:10:26.286661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.286721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.286735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.286742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.286749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.286764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 
00:27:32.437 [2024-11-27 08:10:26.296685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.296745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.296760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.296768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.296775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.296791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 00:27:32.437 [2024-11-27 08:10:26.306728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.306802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.306816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.306824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.306830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.306845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 00:27:32.437 [2024-11-27 08:10:26.316744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.316833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.316853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.316860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.316866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.316883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 
00:27:32.437 [2024-11-27 08:10:26.326765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.326835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.326850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.326857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.326863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.326879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 00:27:32.437 [2024-11-27 08:10:26.336786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.336866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.336881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.336889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.336896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.336912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 00:27:32.437 [2024-11-27 08:10:26.346808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.346877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.346891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.346898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.346904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.346920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 
00:27:32.437 [2024-11-27 08:10:26.356844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.356895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.356909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.356916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.356922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.356941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 00:27:32.437 [2024-11-27 08:10:26.366904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.366984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.366998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.367005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.367011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.367027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 00:27:32.437 [2024-11-27 08:10:26.376904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.376960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.376975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.376982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.376989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.377005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 
00:27:32.437 [2024-11-27 08:10:26.386925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.386984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.386998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.387005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.387012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.387028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.437 qpair failed and we were unable to recover it. 00:27:32.437 [2024-11-27 08:10:26.396962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.437 [2024-11-27 08:10:26.397013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.437 [2024-11-27 08:10:26.397027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.437 [2024-11-27 08:10:26.397034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.437 [2024-11-27 08:10:26.397041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.437 [2024-11-27 08:10:26.397056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 00:27:32.438 [2024-11-27 08:10:26.407003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.407063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.407077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.407084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.407091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.407106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 
00:27:32.438 [2024-11-27 08:10:26.416961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.417033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.417047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.417055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.417061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.417076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 00:27:32.438 [2024-11-27 08:10:26.427052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.427105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.427119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.427126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.427133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.427149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 00:27:32.438 [2024-11-27 08:10:26.437129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.437185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.437200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.437207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.437214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.437229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 
00:27:32.438 [2024-11-27 08:10:26.447141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.447215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.447232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.447240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.447245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.447260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 00:27:32.438 [2024-11-27 08:10:26.457150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.457221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.457235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.457242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.457249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.457264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 00:27:32.438 [2024-11-27 08:10:26.467227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.467334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.467348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.467355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.467362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.467377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 
00:27:32.438 [2024-11-27 08:10:26.477194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.477253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.477266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.477274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.477281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.477296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 00:27:32.438 [2024-11-27 08:10:26.487244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.487305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.487320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.487328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.487339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.487354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 00:27:32.438 [2024-11-27 08:10:26.497195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.497255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.497268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.497276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.497282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.497297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 
00:27:32.438 [2024-11-27 08:10:26.507282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.507337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.507351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.507357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.507364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.507379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 00:27:32.438 [2024-11-27 08:10:26.517320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.517391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.517406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.517414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.517421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.517435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 00:27:32.438 [2024-11-27 08:10:26.527355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.438 [2024-11-27 08:10:26.527416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.438 [2024-11-27 08:10:26.527430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.438 [2024-11-27 08:10:26.527437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.438 [2024-11-27 08:10:26.527444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.438 [2024-11-27 08:10:26.527459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.438 qpair failed and we were unable to recover it. 
00:27:32.439 [2024-11-27 08:10:26.537379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.439 [2024-11-27 08:10:26.537437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.439 [2024-11-27 08:10:26.537452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.439 [2024-11-27 08:10:26.537459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.439 [2024-11-27 08:10:26.537466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.439 [2024-11-27 08:10:26.537481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.439 qpair failed and we were unable to recover it. 00:27:32.696 [2024-11-27 08:10:26.547403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.696 [2024-11-27 08:10:26.547463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.696 [2024-11-27 08:10:26.547479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.697 [2024-11-27 08:10:26.547486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.697 [2024-11-27 08:10:26.547492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.697 [2024-11-27 08:10:26.547508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.697 qpair failed and we were unable to recover it. 00:27:32.697 [2024-11-27 08:10:26.557435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.697 [2024-11-27 08:10:26.557496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.697 [2024-11-27 08:10:26.557510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.697 [2024-11-27 08:10:26.557518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.697 [2024-11-27 08:10:26.557525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.697 [2024-11-27 08:10:26.557540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.697 qpair failed and we were unable to recover it. 
00:27:32.697 [2024-11-27 08:10:26.567466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.697 [2024-11-27 08:10:26.567525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.697 [2024-11-27 08:10:26.567540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.697 [2024-11-27 08:10:26.567547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.697 [2024-11-27 08:10:26.567553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.697 [2024-11-27 08:10:26.567569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.697 qpair failed and we were unable to recover it. 00:27:32.697 [2024-11-27 08:10:26.577500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.697 [2024-11-27 08:10:26.577560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.697 [2024-11-27 08:10:26.577578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.697 [2024-11-27 08:10:26.577586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.697 [2024-11-27 08:10:26.577593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.697 [2024-11-27 08:10:26.577608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.697 qpair failed and we were unable to recover it. 00:27:32.697 [2024-11-27 08:10:26.587534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.697 [2024-11-27 08:10:26.587590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.697 [2024-11-27 08:10:26.587604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.697 [2024-11-27 08:10:26.587611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.697 [2024-11-27 08:10:26.587617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.697 [2024-11-27 08:10:26.587632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.697 qpair failed and we were unable to recover it. 
00:27:32.697 [2024-11-27 08:10:26.597552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.697 [2024-11-27 08:10:26.597608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.697 [2024-11-27 08:10:26.597623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.697 [2024-11-27 08:10:26.597630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.697 [2024-11-27 08:10:26.597637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.697 [2024-11-27 08:10:26.597652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.697 qpair failed and we were unable to recover it. 00:27:32.697 [2024-11-27 08:10:26.607581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.697 [2024-11-27 08:10:26.607638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.697 [2024-11-27 08:10:26.607651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.697 [2024-11-27 08:10:26.607658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.697 [2024-11-27 08:10:26.607665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.697 [2024-11-27 08:10:26.607680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.697 qpair failed and we were unable to recover it. 00:27:32.697 [2024-11-27 08:10:26.617616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.697 [2024-11-27 08:10:26.617672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.697 [2024-11-27 08:10:26.617686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.697 [2024-11-27 08:10:26.617696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.697 [2024-11-27 08:10:26.617703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.697 [2024-11-27 08:10:26.617718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.697 qpair failed and we were unable to recover it. 
00:27:32.697 [2024-11-27 08:10:26.627652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.697 [2024-11-27 08:10:26.627719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.697 [2024-11-27 08:10:26.627734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.697 [2024-11-27 08:10:26.627741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.697 [2024-11-27 08:10:26.627747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.697 [2024-11-27 08:10:26.627762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.697 qpair failed and we were unable to recover it. 00:27:32.697 [2024-11-27 08:10:26.637658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.697 [2024-11-27 08:10:26.637715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.697 [2024-11-27 08:10:26.637729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.697 [2024-11-27 08:10:26.637737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.697 [2024-11-27 08:10:26.637744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.697 [2024-11-27 08:10:26.637760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.697 qpair failed and we were unable to recover it. 00:27:32.697 [2024-11-27 08:10:26.647771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.698 [2024-11-27 08:10:26.647880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.698 [2024-11-27 08:10:26.647894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.698 [2024-11-27 08:10:26.647901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.698 [2024-11-27 08:10:26.647908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.698 [2024-11-27 08:10:26.647923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.698 qpair failed and we were unable to recover it. 
00:27:32.698 [2024-11-27 08:10:26.657747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.698 [2024-11-27 08:10:26.657816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.698 [2024-11-27 08:10:26.657831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.698 [2024-11-27 08:10:26.657838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.698 [2024-11-27 08:10:26.657844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.698 [2024-11-27 08:10:26.657859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.698 qpair failed and we were unable to recover it. 00:27:32.698 [2024-11-27 08:10:26.667673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.698 [2024-11-27 08:10:26.667735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.698 [2024-11-27 08:10:26.667749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.698 [2024-11-27 08:10:26.667757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.698 [2024-11-27 08:10:26.667763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.698 [2024-11-27 08:10:26.667779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.698 qpair failed and we were unable to recover it. 00:27:32.698 [2024-11-27 08:10:26.677774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.698 [2024-11-27 08:10:26.677828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.698 [2024-11-27 08:10:26.677842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.698 [2024-11-27 08:10:26.677849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.698 [2024-11-27 08:10:26.677856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.698 [2024-11-27 08:10:26.677872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.698 qpair failed and we were unable to recover it. 
00:27:32.698 [2024-11-27 08:10:26.687833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.698 [2024-11-27 08:10:26.687893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.698 [2024-11-27 08:10:26.687907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.698 [2024-11-27 08:10:26.687916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.698 [2024-11-27 08:10:26.687922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.698 [2024-11-27 08:10:26.687938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.698 qpair failed and we were unable to recover it. 00:27:32.698 [2024-11-27 08:10:26.697834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.698 [2024-11-27 08:10:26.697894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.698 [2024-11-27 08:10:26.697908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.698 [2024-11-27 08:10:26.697915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.698 [2024-11-27 08:10:26.697922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.698 [2024-11-27 08:10:26.697937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.698 qpair failed and we were unable to recover it. 00:27:32.698 [2024-11-27 08:10:26.707862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.698 [2024-11-27 08:10:26.707924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.698 [2024-11-27 08:10:26.707938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.698 [2024-11-27 08:10:26.707946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.698 [2024-11-27 08:10:26.707956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.698 [2024-11-27 08:10:26.707972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.698 qpair failed and we were unable to recover it. 
00:27:32.698 [2024-11-27 08:10:26.717886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.698 [2024-11-27 08:10:26.717946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.698 [2024-11-27 08:10:26.717965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.698 [2024-11-27 08:10:26.717972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.698 [2024-11-27 08:10:26.717979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.698 [2024-11-27 08:10:26.717995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.698 qpair failed and we were unable to recover it. 00:27:32.698 [2024-11-27 08:10:26.727921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.698 [2024-11-27 08:10:26.727987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.698 [2024-11-27 08:10:26.728002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.698 [2024-11-27 08:10:26.728010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.698 [2024-11-27 08:10:26.728017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.698 [2024-11-27 08:10:26.728032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.698 qpair failed and we were unable to recover it. 00:27:32.698 [2024-11-27 08:10:26.738020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.698 [2024-11-27 08:10:26.738124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.698 [2024-11-27 08:10:26.738139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.698 [2024-11-27 08:10:26.738146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.698 [2024-11-27 08:10:26.738153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.698 [2024-11-27 08:10:26.738169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.698 qpair failed and we were unable to recover it. 
00:27:32.698 [2024-11-27 08:10:26.747981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.699 [2024-11-27 08:10:26.748042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.699 [2024-11-27 08:10:26.748056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.699 [2024-11-27 08:10:26.748068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.699 [2024-11-27 08:10:26.748074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.699 [2024-11-27 08:10:26.748089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.699 qpair failed and we were unable to recover it. 00:27:32.699 [2024-11-27 08:10:26.758011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.699 [2024-11-27 08:10:26.758070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.699 [2024-11-27 08:10:26.758092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.699 [2024-11-27 08:10:26.758100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.699 [2024-11-27 08:10:26.758106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.699 [2024-11-27 08:10:26.758123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.699 qpair failed and we were unable to recover it. 00:27:32.699 [2024-11-27 08:10:26.768074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.699 [2024-11-27 08:10:26.768137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.699 [2024-11-27 08:10:26.768152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.699 [2024-11-27 08:10:26.768159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.699 [2024-11-27 08:10:26.768166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.699 [2024-11-27 08:10:26.768182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.699 qpair failed and we were unable to recover it. 
00:27:32.699 [2024-11-27 08:10:26.778073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.699 [2024-11-27 08:10:26.778132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.699 [2024-11-27 08:10:26.778146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.699 [2024-11-27 08:10:26.778154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.699 [2024-11-27 08:10:26.778160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.699 [2024-11-27 08:10:26.778177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.699 qpair failed and we were unable to recover it. 00:27:32.699 [2024-11-27 08:10:26.788120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.699 [2024-11-27 08:10:26.788179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.699 [2024-11-27 08:10:26.788193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.699 [2024-11-27 08:10:26.788202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.699 [2024-11-27 08:10:26.788209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.699 [2024-11-27 08:10:26.788228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.699 qpair failed and we were unable to recover it. 00:27:32.699 [2024-11-27 08:10:26.798128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.699 [2024-11-27 08:10:26.798185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.699 [2024-11-27 08:10:26.798200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.699 [2024-11-27 08:10:26.798207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.699 [2024-11-27 08:10:26.798214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.699 [2024-11-27 08:10:26.798230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.699 qpair failed and we were unable to recover it. 
00:27:32.958 [2024-11-27 08:10:26.808100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.958 [2024-11-27 08:10:26.808161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.958 [2024-11-27 08:10:26.808174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.958 [2024-11-27 08:10:26.808182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.958 [2024-11-27 08:10:26.808189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.958 [2024-11-27 08:10:26.808204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.958 qpair failed and we were unable to recover it. 00:27:32.958 [2024-11-27 08:10:26.818117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.958 [2024-11-27 08:10:26.818176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.958 [2024-11-27 08:10:26.818190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.958 [2024-11-27 08:10:26.818197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.958 [2024-11-27 08:10:26.818204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.958 [2024-11-27 08:10:26.818218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.958 qpair failed and we were unable to recover it. 00:27:32.958 [2024-11-27 08:10:26.828144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.958 [2024-11-27 08:10:26.828201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.958 [2024-11-27 08:10:26.828215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.958 [2024-11-27 08:10:26.828222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.958 [2024-11-27 08:10:26.828229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.958 [2024-11-27 08:10:26.828244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.958 qpair failed and we were unable to recover it. 
00:27:32.958 [2024-11-27 08:10:26.838218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.958 [2024-11-27 08:10:26.838295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.958 [2024-11-27 08:10:26.838310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.958 [2024-11-27 08:10:26.838317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.958 [2024-11-27 08:10:26.838323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.958 [2024-11-27 08:10:26.838339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.958 qpair failed and we were unable to recover it. 00:27:32.958 [2024-11-27 08:10:26.848255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.958 [2024-11-27 08:10:26.848312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.958 [2024-11-27 08:10:26.848326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.958 [2024-11-27 08:10:26.848333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.958 [2024-11-27 08:10:26.848340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.958 [2024-11-27 08:10:26.848355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.958 qpair failed and we were unable to recover it. 00:27:32.958 [2024-11-27 08:10:26.858299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.958 [2024-11-27 08:10:26.858360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.958 [2024-11-27 08:10:26.858373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.958 [2024-11-27 08:10:26.858380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.958 [2024-11-27 08:10:26.858387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.958 [2024-11-27 08:10:26.858402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.958 qpair failed and we were unable to recover it. 
00:27:32.958 [2024-11-27 08:10:26.868294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.958 [2024-11-27 08:10:26.868356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.958 [2024-11-27 08:10:26.868370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.958 [2024-11-27 08:10:26.868378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.958 [2024-11-27 08:10:26.868384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.958 [2024-11-27 08:10:26.868399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.958 qpair failed and we were unable to recover it. 00:27:32.958 [2024-11-27 08:10:26.878348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.958 [2024-11-27 08:10:26.878410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.958 [2024-11-27 08:10:26.878428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.958 [2024-11-27 08:10:26.878436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.958 [2024-11-27 08:10:26.878443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.958 [2024-11-27 08:10:26.878458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.958 qpair failed and we were unable to recover it. 00:27:32.958 [2024-11-27 08:10:26.888429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.958 [2024-11-27 08:10:26.888488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.958 [2024-11-27 08:10:26.888503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.958 [2024-11-27 08:10:26.888510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.958 [2024-11-27 08:10:26.888517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.958 [2024-11-27 08:10:26.888533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.958 qpair failed and we were unable to recover it. 
00:27:32.958 [2024-11-27 08:10:26.898415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.958 [2024-11-27 08:10:26.898473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.958 [2024-11-27 08:10:26.898487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.958 [2024-11-27 08:10:26.898494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.958 [2024-11-27 08:10:26.898501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.958 [2024-11-27 08:10:26.898516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.958 qpair failed and we were unable to recover it. 00:27:32.959 [2024-11-27 08:10:26.908434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:26.908491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:26.908505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:26.908513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:26.908520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:26.908534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 00:27:32.959 [2024-11-27 08:10:26.918466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:26.918524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:26.918539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:26.918546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:26.918557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:26.918572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 
00:27:32.959 [2024-11-27 08:10:26.928499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:26.928559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:26.928572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:26.928579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:26.928586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:26.928602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 00:27:32.959 [2024-11-27 08:10:26.938458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:26.938513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:26.938527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:26.938535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:26.938541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:26.938556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 00:27:32.959 [2024-11-27 08:10:26.948484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:26.948541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:26.948555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:26.948562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:26.948568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:26.948583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 
00:27:32.959 [2024-11-27 08:10:26.958611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:26.958667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:26.958681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:26.958689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:26.958696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:26.958710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 00:27:32.959 [2024-11-27 08:10:26.968615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:26.968676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:26.968690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:26.968698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:26.968705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:26.968720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 00:27:32.959 [2024-11-27 08:10:26.978682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:26.978742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:26.978756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:26.978764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:26.978770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:26.978786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 
00:27:32.959 [2024-11-27 08:10:26.988637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:26.988711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:26.988725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:26.988732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:26.988739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:26.988755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 00:27:32.959 [2024-11-27 08:10:26.998627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:26.998717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:26.998731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:26.998738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:26.998745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:26.998761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 00:27:32.959 [2024-11-27 08:10:27.008764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:27.008824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:27.008844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:27.008852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:27.008859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:27.008875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 
00:27:32.959 [2024-11-27 08:10:27.018738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:27.018800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:27.018814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:27.018822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:27.018829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:27.018844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 00:27:32.959 [2024-11-27 08:10:27.028783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.959 [2024-11-27 08:10:27.028873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.959 [2024-11-27 08:10:27.028887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.959 [2024-11-27 08:10:27.028895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.959 [2024-11-27 08:10:27.028901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.959 [2024-11-27 08:10:27.028916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.959 qpair failed and we were unable to recover it. 00:27:32.959 [2024-11-27 08:10:27.038782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.960 [2024-11-27 08:10:27.038859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.960 [2024-11-27 08:10:27.038873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.960 [2024-11-27 08:10:27.038880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.960 [2024-11-27 08:10:27.038887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.960 [2024-11-27 08:10:27.038902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.960 qpair failed and we were unable to recover it. 
00:27:32.960 [2024-11-27 08:10:27.048886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.960 [2024-11-27 08:10:27.048971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.960 [2024-11-27 08:10:27.048986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.960 [2024-11-27 08:10:27.048993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.960 [2024-11-27 08:10:27.049003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.960 [2024-11-27 08:10:27.049019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.960 qpair failed and we were unable to recover it. 00:27:32.960 [2024-11-27 08:10:27.058898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.960 [2024-11-27 08:10:27.058967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.960 [2024-11-27 08:10:27.058982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.960 [2024-11-27 08:10:27.058989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.960 [2024-11-27 08:10:27.058996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:32.960 [2024-11-27 08:10:27.059011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.960 qpair failed and we were unable to recover it. 00:27:33.217 [2024-11-27 08:10:27.068907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.217 [2024-11-27 08:10:27.068999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.217 [2024-11-27 08:10:27.069014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.217 [2024-11-27 08:10:27.069020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.217 [2024-11-27 08:10:27.069027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.217 [2024-11-27 08:10:27.069042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.217 qpair failed and we were unable to recover it. 
00:27:33.217 [2024-11-27 08:10:27.078931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.217 [2024-11-27 08:10:27.079000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.217 [2024-11-27 08:10:27.079015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.217 [2024-11-27 08:10:27.079022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.217 [2024-11-27 08:10:27.079029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.217 [2024-11-27 08:10:27.079044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.217 qpair failed and we were unable to recover it. 00:27:33.217 [2024-11-27 08:10:27.088973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.217 [2024-11-27 08:10:27.089031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.217 [2024-11-27 08:10:27.089047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.217 [2024-11-27 08:10:27.089054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.089061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.089077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 00:27:33.218 [2024-11-27 08:10:27.099027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.099083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.099098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.099104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.099111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.099126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 
00:27:33.218 [2024-11-27 08:10:27.109035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.109092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.109106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.109114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.109121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.109137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 00:27:33.218 [2024-11-27 08:10:27.119039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.119096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.119109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.119116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.119123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.119139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 00:27:33.218 [2024-11-27 08:10:27.129129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.129211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.129225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.129232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.129238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.129253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 
00:27:33.218 [2024-11-27 08:10:27.139103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.139160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.139176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.139184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.139190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.139205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 00:27:33.218 [2024-11-27 08:10:27.149146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.149211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.149225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.149232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.149239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.149254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 00:27:33.218 [2024-11-27 08:10:27.159166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.159228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.159242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.159250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.159256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.159271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 
00:27:33.218 [2024-11-27 08:10:27.169149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.169211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.169225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.169232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.169239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.169255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 00:27:33.218 [2024-11-27 08:10:27.179235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.179294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.179308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.179318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.179324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.179340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 00:27:33.218 [2024-11-27 08:10:27.189258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.189318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.189332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.189339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.189346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.189361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 
00:27:33.218 [2024-11-27 08:10:27.199316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.199378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.199392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.199400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.199406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.199422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 00:27:33.218 [2024-11-27 08:10:27.209316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.209376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.209390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.209398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.209405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.209420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 00:27:33.218 [2024-11-27 08:10:27.219282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.219340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.219354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.219362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.219369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.219384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 
00:27:33.218 [2024-11-27 08:10:27.229361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.229419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.218 [2024-11-27 08:10:27.229433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.218 [2024-11-27 08:10:27.229441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.218 [2024-11-27 08:10:27.229447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.218 [2024-11-27 08:10:27.229463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.218 qpair failed and we were unable to recover it. 00:27:33.218 [2024-11-27 08:10:27.239389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.218 [2024-11-27 08:10:27.239447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.219 [2024-11-27 08:10:27.239462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.219 [2024-11-27 08:10:27.239469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.219 [2024-11-27 08:10:27.239475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.219 [2024-11-27 08:10:27.239491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.219 qpair failed and we were unable to recover it. 00:27:33.219 [2024-11-27 08:10:27.249436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.219 [2024-11-27 08:10:27.249493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.219 [2024-11-27 08:10:27.249507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.219 [2024-11-27 08:10:27.249514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.219 [2024-11-27 08:10:27.249521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.219 [2024-11-27 08:10:27.249537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.219 qpair failed and we were unable to recover it. 
00:27:33.219 [2024-11-27 08:10:27.259465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.219 [2024-11-27 08:10:27.259526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.219 [2024-11-27 08:10:27.259540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.219 [2024-11-27 08:10:27.259547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.219 [2024-11-27 08:10:27.259554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.219 [2024-11-27 08:10:27.259570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.219 qpair failed and we were unable to recover it. 00:27:33.219 [2024-11-27 08:10:27.269489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.219 [2024-11-27 08:10:27.269555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.219 [2024-11-27 08:10:27.269570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.219 [2024-11-27 08:10:27.269577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.219 [2024-11-27 08:10:27.269584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.219 [2024-11-27 08:10:27.269598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.219 qpair failed and we were unable to recover it. 00:27:33.219 [2024-11-27 08:10:27.279505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.219 [2024-11-27 08:10:27.279564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.219 [2024-11-27 08:10:27.279579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.219 [2024-11-27 08:10:27.279588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.219 [2024-11-27 08:10:27.279594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.219 [2024-11-27 08:10:27.279610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.219 qpair failed and we were unable to recover it. 
00:27:33.219 [2024-11-27 08:10:27.289553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.219 [2024-11-27 08:10:27.289616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.219 [2024-11-27 08:10:27.289631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.219 [2024-11-27 08:10:27.289638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.219 [2024-11-27 08:10:27.289645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.219 [2024-11-27 08:10:27.289660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.219 qpair failed and we were unable to recover it. 00:27:33.219 [2024-11-27 08:10:27.299577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.219 [2024-11-27 08:10:27.299637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.219 [2024-11-27 08:10:27.299653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.219 [2024-11-27 08:10:27.299661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.219 [2024-11-27 08:10:27.299668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.219 [2024-11-27 08:10:27.299683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.219 qpair failed and we were unable to recover it. 00:27:33.219 [2024-11-27 08:10:27.309599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.219 [2024-11-27 08:10:27.309663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.219 [2024-11-27 08:10:27.309678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.219 [2024-11-27 08:10:27.309688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.219 [2024-11-27 08:10:27.309694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.219 [2024-11-27 08:10:27.309709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.219 qpair failed and we were unable to recover it. 
00:27:33.219 [2024-11-27 08:10:27.319627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.219 [2024-11-27 08:10:27.319682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.219 [2024-11-27 08:10:27.319697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.219 [2024-11-27 08:10:27.319704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.219 [2024-11-27 08:10:27.319711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.219 [2024-11-27 08:10:27.319726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.219 qpair failed and we were unable to recover it. 00:27:33.477 [2024-11-27 08:10:27.329673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.477 [2024-11-27 08:10:27.329734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.477 [2024-11-27 08:10:27.329748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.477 [2024-11-27 08:10:27.329755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.477 [2024-11-27 08:10:27.329762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.477 [2024-11-27 08:10:27.329776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.477 qpair failed and we were unable to recover it. 00:27:33.477 [2024-11-27 08:10:27.339751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.477 [2024-11-27 08:10:27.339851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.477 [2024-11-27 08:10:27.339865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.477 [2024-11-27 08:10:27.339872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.477 [2024-11-27 08:10:27.339879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.477 [2024-11-27 08:10:27.339894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.477 qpair failed and we were unable to recover it. 
00:27:33.477 [2024-11-27 08:10:27.349708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.349764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.349778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.349784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.349792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.349812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 00:27:33.478 [2024-11-27 08:10:27.359741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.359801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.359817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.359826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.359832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.359848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 00:27:33.478 [2024-11-27 08:10:27.369785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.369845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.369859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.369866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.369873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.369888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 
00:27:33.478 [2024-11-27 08:10:27.379813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.379874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.379888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.379895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.379902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.379917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 00:27:33.478 [2024-11-27 08:10:27.389837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.389892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.389906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.389913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.389920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.389935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 00:27:33.478 [2024-11-27 08:10:27.399856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.399915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.399929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.399937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.399943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.399964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 
00:27:33.478 [2024-11-27 08:10:27.409924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.409991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.410006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.410014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.410020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.410036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 00:27:33.478 [2024-11-27 08:10:27.419971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.420027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.420041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.420048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.420055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.420070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 00:27:33.478 [2024-11-27 08:10:27.429878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.429937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.429955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.429962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.429969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.429984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 
00:27:33.478 [2024-11-27 08:10:27.439977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.440035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.440053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.440061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.440067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.440082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 00:27:33.478 [2024-11-27 08:10:27.450016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.450086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.450100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.450107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.450114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.450129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 00:27:33.478 [2024-11-27 08:10:27.460035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.478 [2024-11-27 08:10:27.460093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.478 [2024-11-27 08:10:27.460106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.478 [2024-11-27 08:10:27.460113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.478 [2024-11-27 08:10:27.460121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.478 [2024-11-27 08:10:27.460136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.478 qpair failed and we were unable to recover it. 
00:27:33.478 [2024-11-27 08:10:27.470043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.470100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.470113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.470121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.470127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.470142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-27 08:10:27.480082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.480140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.480154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.480162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.480172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.480188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-27 08:10:27.490169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.490236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.490250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.490258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.490264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.490280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 
00:27:33.479 [2024-11-27 08:10:27.500146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.500208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.500222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.500230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.500236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.500252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-27 08:10:27.510168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.510269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.510284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.510291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.510298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.510314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-27 08:10:27.520196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.520254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.520268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.520276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.520282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.520298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 
00:27:33.479 [2024-11-27 08:10:27.530265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.530325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.530339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.530346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.530353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.530369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-27 08:10:27.540196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.540249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.540263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.540270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.540277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.540293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-27 08:10:27.550273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.550330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.550345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.550352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.550359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.550374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 
00:27:33.479 [2024-11-27 08:10:27.560375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.560455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.560470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.560478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.560484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.560499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-27 08:10:27.570353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.479 [2024-11-27 08:10:27.570413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.479 [2024-11-27 08:10:27.570431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.479 [2024-11-27 08:10:27.570438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.479 [2024-11-27 08:10:27.570445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.479 [2024-11-27 08:10:27.570461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.479 qpair failed and we were unable to recover it. 00:27:33.479 [2024-11-27 08:10:27.580421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.480 [2024-11-27 08:10:27.580482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.480 [2024-11-27 08:10:27.580496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.480 [2024-11-27 08:10:27.580505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.480 [2024-11-27 08:10:27.580511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.480 [2024-11-27 08:10:27.580526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.480 qpair failed and we were unable to recover it. 
00:27:33.738 [2024-11-27 08:10:27.590390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.738 [2024-11-27 08:10:27.590443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.738 [2024-11-27 08:10:27.590458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.738 [2024-11-27 08:10:27.590464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.738 [2024-11-27 08:10:27.590471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.738 [2024-11-27 08:10:27.590486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.738 qpair failed and we were unable to recover it. 00:27:33.738 [2024-11-27 08:10:27.600437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.738 [2024-11-27 08:10:27.600509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.738 [2024-11-27 08:10:27.600523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.738 [2024-11-27 08:10:27.600530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.738 [2024-11-27 08:10:27.600536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.738 [2024-11-27 08:10:27.600551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.738 qpair failed and we were unable to recover it. 00:27:33.738 [2024-11-27 08:10:27.610486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.738 [2024-11-27 08:10:27.610549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.738 [2024-11-27 08:10:27.610564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.738 [2024-11-27 08:10:27.610571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.738 [2024-11-27 08:10:27.610581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.738 [2024-11-27 08:10:27.610596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.738 qpair failed and we were unable to recover it. 
00:27:33.738 [2024-11-27 08:10:27.620499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.738 [2024-11-27 08:10:27.620556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.738 [2024-11-27 08:10:27.620570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.738 [2024-11-27 08:10:27.620577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.738 [2024-11-27 08:10:27.620584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.738 [2024-11-27 08:10:27.620600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.738 qpair failed and we were unable to recover it. 00:27:33.738 [2024-11-27 08:10:27.630460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.738 [2024-11-27 08:10:27.630513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.738 [2024-11-27 08:10:27.630527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.738 [2024-11-27 08:10:27.630534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.630542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.630556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 00:27:33.739 [2024-11-27 08:10:27.640539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.640592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.640606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.640613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.640620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.640635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 
00:27:33.739 [2024-11-27 08:10:27.650578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.650636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.650650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.650657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.650664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.650679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 00:27:33.739 [2024-11-27 08:10:27.660521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.660629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.660643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.660651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.660658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.660674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 00:27:33.739 [2024-11-27 08:10:27.670563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.670618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.670632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.670639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.670646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.670662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 
00:27:33.739 [2024-11-27 08:10:27.680643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.680735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.680749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.680756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.680762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.680779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 00:27:33.739 [2024-11-27 08:10:27.690694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.690767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.690782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.690789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.690795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.690811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 00:27:33.739 [2024-11-27 08:10:27.700709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.700770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.700792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.700800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.700807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.700823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 
00:27:33.739 [2024-11-27 08:10:27.710778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.710833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.710848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.710855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.710862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.710878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 00:27:33.739 [2024-11-27 08:10:27.720790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.720891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.720908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.720915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.720923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.720938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 00:27:33.739 [2024-11-27 08:10:27.730791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.730853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.730869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.730876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.730884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.730899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 
00:27:33.739 [2024-11-27 08:10:27.740859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.740919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.740933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.740944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.740954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.740970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 00:27:33.739 [2024-11-27 08:10:27.750908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.750991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.751005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.751012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.751018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.739 [2024-11-27 08:10:27.751034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.739 qpair failed and we were unable to recover it. 00:27:33.739 [2024-11-27 08:10:27.760862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.739 [2024-11-27 08:10:27.760923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.739 [2024-11-27 08:10:27.760937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.739 [2024-11-27 08:10:27.760945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.739 [2024-11-27 08:10:27.760956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.740 [2024-11-27 08:10:27.760971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.740 qpair failed and we were unable to recover it. 
00:27:33.740 [2024-11-27 08:10:27.770893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.740 [2024-11-27 08:10:27.770959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.740 [2024-11-27 08:10:27.770974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.740 [2024-11-27 08:10:27.770981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.740 [2024-11-27 08:10:27.770987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.740 [2024-11-27 08:10:27.771003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.740 qpair failed and we were unable to recover it. 00:27:33.740 [2024-11-27 08:10:27.780923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.740 [2024-11-27 08:10:27.780994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.740 [2024-11-27 08:10:27.781010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.740 [2024-11-27 08:10:27.781017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.740 [2024-11-27 08:10:27.781024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.740 [2024-11-27 08:10:27.781041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.740 qpair failed and we were unable to recover it. 00:27:33.740 [2024-11-27 08:10:27.790940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.740 [2024-11-27 08:10:27.791004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.740 [2024-11-27 08:10:27.791018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.740 [2024-11-27 08:10:27.791026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.740 [2024-11-27 08:10:27.791032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.740 [2024-11-27 08:10:27.791047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.740 qpair failed and we were unable to recover it. 
00:27:33.740 [2024-11-27 08:10:27.800945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.740 [2024-11-27 08:10:27.801016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.740 [2024-11-27 08:10:27.801031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.740 [2024-11-27 08:10:27.801038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.740 [2024-11-27 08:10:27.801044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.740 [2024-11-27 08:10:27.801060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.740 qpair failed and we were unable to recover it. 00:27:33.740 [2024-11-27 08:10:27.811013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.740 [2024-11-27 08:10:27.811072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.740 [2024-11-27 08:10:27.811086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.740 [2024-11-27 08:10:27.811093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.740 [2024-11-27 08:10:27.811100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.740 [2024-11-27 08:10:27.811115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.740 qpair failed and we were unable to recover it. 00:27:33.740 [2024-11-27 08:10:27.821033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.740 [2024-11-27 08:10:27.821091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.740 [2024-11-27 08:10:27.821105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.740 [2024-11-27 08:10:27.821112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.740 [2024-11-27 08:10:27.821119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.740 [2024-11-27 08:10:27.821134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.740 qpair failed and we were unable to recover it. 
00:27:33.740 [2024-11-27 08:10:27.831063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.740 [2024-11-27 08:10:27.831128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.740 [2024-11-27 08:10:27.831142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.740 [2024-11-27 08:10:27.831150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.740 [2024-11-27 08:10:27.831157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.740 [2024-11-27 08:10:27.831172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.740 qpair failed and we were unable to recover it. 00:27:33.740 [2024-11-27 08:10:27.841057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.740 [2024-11-27 08:10:27.841109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.740 [2024-11-27 08:10:27.841123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.740 [2024-11-27 08:10:27.841130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.740 [2024-11-27 08:10:27.841137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.740 [2024-11-27 08:10:27.841153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.740 qpair failed and we were unable to recover it. 00:27:33.999 [2024-11-27 08:10:27.851175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.999 [2024-11-27 08:10:27.851262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.999 [2024-11-27 08:10:27.851277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.999 [2024-11-27 08:10:27.851284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.999 [2024-11-27 08:10:27.851290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.999 [2024-11-27 08:10:27.851306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.999 qpair failed and we were unable to recover it. 
00:27:33.999 [2024-11-27 08:10:27.861114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.999 [2024-11-27 08:10:27.861208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.999 [2024-11-27 08:10:27.861222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.999 [2024-11-27 08:10:27.861229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.999 [2024-11-27 08:10:27.861236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.999 [2024-11-27 08:10:27.861251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.999 qpair failed and we were unable to recover it. 00:27:33.999 [2024-11-27 08:10:27.871196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.999 [2024-11-27 08:10:27.871252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.999 [2024-11-27 08:10:27.871266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.999 [2024-11-27 08:10:27.871277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.999 [2024-11-27 08:10:27.871283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.999 [2024-11-27 08:10:27.871299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.999 qpair failed and we were unable to recover it. 00:27:33.999 [2024-11-27 08:10:27.881142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.999 [2024-11-27 08:10:27.881207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.999 [2024-11-27 08:10:27.881221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.999 [2024-11-27 08:10:27.881229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.999 [2024-11-27 08:10:27.881235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.999 [2024-11-27 08:10:27.881250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.999 qpair failed and we were unable to recover it. 
00:27:33.999 [2024-11-27 08:10:27.891257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.999 [2024-11-27 08:10:27.891315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.999 [2024-11-27 08:10:27.891329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.999 [2024-11-27 08:10:27.891336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.999 [2024-11-27 08:10:27.891342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.999 [2024-11-27 08:10:27.891358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.999 qpair failed and we were unable to recover it. 00:27:33.999 [2024-11-27 08:10:27.901400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.999 [2024-11-27 08:10:27.901465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.999 [2024-11-27 08:10:27.901478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.999 [2024-11-27 08:10:27.901485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.999 [2024-11-27 08:10:27.901492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:33.999 [2024-11-27 08:10:27.901507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 00:27:34.000 [2024-11-27 08:10:27.911346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:27.911405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:27.911419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:27.911426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:27.911433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:27.911451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 
00:27:34.000 [2024-11-27 08:10:27.921414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:27.921471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:27.921485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:27.921493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:27.921499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:27.921514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 00:27:34.000 [2024-11-27 08:10:27.931375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:27.931468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:27.931483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:27.931490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:27.931496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:27.931513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 00:27:34.000 [2024-11-27 08:10:27.941361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:27.941430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:27.941444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:27.941451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:27.941458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:27.941473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 
00:27:34.000 [2024-11-27 08:10:27.951425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:27.951481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:27.951496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:27.951504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:27.951510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:27.951526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 00:27:34.000 [2024-11-27 08:10:27.961446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:27.961502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:27.961516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:27.961523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:27.961530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:27.961545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 00:27:34.000 [2024-11-27 08:10:27.971486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:27.971546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:27.971560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:27.971568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:27.971574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:27.971589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 
00:27:34.000 [2024-11-27 08:10:27.981497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:27.981551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:27.981566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:27.981573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:27.981579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:27.981595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 00:27:34.000 [2024-11-27 08:10:27.991464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:27.991531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:27.991546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:27.991554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:27.991560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:27.991575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 00:27:34.000 [2024-11-27 08:10:28.001562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:28.001619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:28.001636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:28.001644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:28.001650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:28.001666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 
00:27:34.000 [2024-11-27 08:10:28.011580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:28.011639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:28.011654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:28.011661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:28.011668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:28.011684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 00:27:34.000 [2024-11-27 08:10:28.021617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:28.021671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:28.021685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:28.021692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:28.021698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:28.021714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 00:27:34.000 [2024-11-27 08:10:28.031645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.000 [2024-11-27 08:10:28.031701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.000 [2024-11-27 08:10:28.031717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.000 [2024-11-27 08:10:28.031725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.000 [2024-11-27 08:10:28.031732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.000 [2024-11-27 08:10:28.031746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.000 qpair failed and we were unable to recover it. 
00:27:34.001 [2024-11-27 08:10:28.041670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.001 [2024-11-27 08:10:28.041721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.001 [2024-11-27 08:10:28.041735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.001 [2024-11-27 08:10:28.041742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.001 [2024-11-27 08:10:28.041752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.001 [2024-11-27 08:10:28.041768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.001 qpair failed and we were unable to recover it. 00:27:34.001 [2024-11-27 08:10:28.051712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.001 [2024-11-27 08:10:28.051768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.001 [2024-11-27 08:10:28.051782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.001 [2024-11-27 08:10:28.051789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.001 [2024-11-27 08:10:28.051796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.001 [2024-11-27 08:10:28.051811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.001 qpair failed and we were unable to recover it. 00:27:34.001 [2024-11-27 08:10:28.061745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.001 [2024-11-27 08:10:28.061800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.001 [2024-11-27 08:10:28.061814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.001 [2024-11-27 08:10:28.061821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.001 [2024-11-27 08:10:28.061828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.001 [2024-11-27 08:10:28.061843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.001 qpair failed and we were unable to recover it. 
00:27:34.001 [2024-11-27 08:10:28.071768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.001 [2024-11-27 08:10:28.071826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.001 [2024-11-27 08:10:28.071849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.001 [2024-11-27 08:10:28.071856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.001 [2024-11-27 08:10:28.071863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.001 [2024-11-27 08:10:28.071884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.001 qpair failed and we were unable to recover it. 00:27:34.001 [2024-11-27 08:10:28.081812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.001 [2024-11-27 08:10:28.081866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.001 [2024-11-27 08:10:28.081881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.001 [2024-11-27 08:10:28.081888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.001 [2024-11-27 08:10:28.081895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.001 [2024-11-27 08:10:28.081911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.001 qpair failed and we were unable to recover it. 00:27:34.001 [2024-11-27 08:10:28.091754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.001 [2024-11-27 08:10:28.091816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.001 [2024-11-27 08:10:28.091830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.001 [2024-11-27 08:10:28.091839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.001 [2024-11-27 08:10:28.091845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.001 [2024-11-27 08:10:28.091860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.001 qpair failed and we were unable to recover it. 
00:27:34.001 [2024-11-27 08:10:28.101888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.001 [2024-11-27 08:10:28.101950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.001 [2024-11-27 08:10:28.101965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.001 [2024-11-27 08:10:28.101973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.001 [2024-11-27 08:10:28.101980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.001 [2024-11-27 08:10:28.101995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.001 qpair failed and we were unable to recover it. 00:27:34.259 [2024-11-27 08:10:28.111882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.259 [2024-11-27 08:10:28.111945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.259 [2024-11-27 08:10:28.111964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.259 [2024-11-27 08:10:28.111971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.259 [2024-11-27 08:10:28.111978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.259 [2024-11-27 08:10:28.111994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.259 qpair failed and we were unable to recover it. 00:27:34.259 [2024-11-27 08:10:28.121908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.259 [2024-11-27 08:10:28.121970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.259 [2024-11-27 08:10:28.121985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.259 [2024-11-27 08:10:28.121993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.122000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.122015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 
00:27:34.260 [2024-11-27 08:10:28.131958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.132031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.132048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.132055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.132062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.132077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 00:27:34.260 [2024-11-27 08:10:28.141983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.142045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.142059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.142067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.142074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.142089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 00:27:34.260 [2024-11-27 08:10:28.152002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.152055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.152069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.152076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.152083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.152098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 
00:27:34.260 [2024-11-27 08:10:28.162041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.162099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.162113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.162121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.162127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.162143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 00:27:34.260 [2024-11-27 08:10:28.172019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.172082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.172096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.172103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.172113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.172129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 00:27:34.260 [2024-11-27 08:10:28.182046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.182104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.182119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.182126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.182132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.182147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 
00:27:34.260 [2024-11-27 08:10:28.192051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.192111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.192124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.192132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.192138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.192153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 00:27:34.260 [2024-11-27 08:10:28.202139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.202199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.202213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.202221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.202228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.202243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 00:27:34.260 [2024-11-27 08:10:28.212125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.212183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.212198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.212206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.212213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.212228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 
00:27:34.260 [2024-11-27 08:10:28.222130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.222192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.222207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.222216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.222223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.222238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 00:27:34.260 [2024-11-27 08:10:28.232232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.232290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.232304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.232312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.232318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.232333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 00:27:34.260 [2024-11-27 08:10:28.242286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.242345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.260 [2024-11-27 08:10:28.242359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.260 [2024-11-27 08:10:28.242366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.260 [2024-11-27 08:10:28.242373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.260 [2024-11-27 08:10:28.242388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.260 qpair failed and we were unable to recover it. 
00:27:34.260 [2024-11-27 08:10:28.252287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.260 [2024-11-27 08:10:28.252348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.252362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.252369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.252376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.252392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 00:27:34.261 [2024-11-27 08:10:28.262291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.262383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.262399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.262406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.262413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.262429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 00:27:34.261 [2024-11-27 08:10:28.272347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.272411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.272426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.272433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.272440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.272455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 
00:27:34.261 [2024-11-27 08:10:28.282361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.282422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.282436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.282444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.282450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.282466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 00:27:34.261 [2024-11-27 08:10:28.292354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.292429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.292444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.292451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.292457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.292471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 00:27:34.261 [2024-11-27 08:10:28.302400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.302462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.302476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.302487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.302493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.302508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 
00:27:34.261 [2024-11-27 08:10:28.312454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.312510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.312525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.312532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.312538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.312554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 00:27:34.261 [2024-11-27 08:10:28.322480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.322535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.322550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.322557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.322563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.322578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 00:27:34.261 [2024-11-27 08:10:28.332518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.332578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.332592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.332600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.332607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.332622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 
00:27:34.261 [2024-11-27 08:10:28.342525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.342589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.342603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.342610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.342617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.342635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 00:27:34.261 [2024-11-27 08:10:28.352499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.352558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.352572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.352580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.352586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.352601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 00:27:34.261 [2024-11-27 08:10:28.362620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.261 [2024-11-27 08:10:28.362695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.261 [2024-11-27 08:10:28.362709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.261 [2024-11-27 08:10:28.362716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.261 [2024-11-27 08:10:28.362722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.261 [2024-11-27 08:10:28.362738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.261 qpair failed and we were unable to recover it. 
00:27:34.521 [2024-11-27 08:10:28.372555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.372615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.372629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.372636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.372642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.372658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 00:27:34.521 [2024-11-27 08:10:28.382667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.382724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.382738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.382745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.382752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.382767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 00:27:34.521 [2024-11-27 08:10:28.392674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.392731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.392746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.392753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.392759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.392774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 
00:27:34.521 [2024-11-27 08:10:28.402709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.402765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.402780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.402788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.402795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.402810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 00:27:34.521 [2024-11-27 08:10:28.412786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.412862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.412877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.412885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.412892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.412907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 00:27:34.521 [2024-11-27 08:10:28.422798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.422906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.422920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.422928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.422935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.422955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 
00:27:34.521 [2024-11-27 08:10:28.432763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.432820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.432834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.432847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.432853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.432868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 00:27:34.521 [2024-11-27 08:10:28.442815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.442875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.442889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.442896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.442903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.442918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 00:27:34.521 [2024-11-27 08:10:28.452861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.452920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.452934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.452942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.452952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.452969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 
00:27:34.521 [2024-11-27 08:10:28.462921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.462980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.462995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.463002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.463009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.463025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 00:27:34.521 [2024-11-27 08:10:28.472915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.521 [2024-11-27 08:10:28.473012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.521 [2024-11-27 08:10:28.473027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.521 [2024-11-27 08:10:28.473034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.521 [2024-11-27 08:10:28.473040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.521 [2024-11-27 08:10:28.473059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.521 qpair failed and we were unable to recover it. 00:27:34.522 [2024-11-27 08:10:28.482957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.483013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.483027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.483033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.483040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.483057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 
00:27:34.522 [2024-11-27 08:10:28.492910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.492984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.492998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.493005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.493011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.493027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 00:27:34.522 [2024-11-27 08:10:28.502935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.502997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.503011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.503019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.503026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.503042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 00:27:34.522 [2024-11-27 08:10:28.512976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.513034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.513048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.513056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.513062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.513077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 
00:27:34.522 [2024-11-27 08:10:28.522984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.523042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.523056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.523064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.523070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.523086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 00:27:34.522 [2024-11-27 08:10:28.533034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.533093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.533107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.533114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.533121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.533136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 00:27:34.522 [2024-11-27 08:10:28.543127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.543186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.543200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.543207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.543213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.543228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 
00:27:34.522 [2024-11-27 08:10:28.553138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.553196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.553210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.553216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.553223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.553239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 00:27:34.522 [2024-11-27 08:10:28.563222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.563307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.563324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.563331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.563337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.563352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 00:27:34.522 [2024-11-27 08:10:28.573157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.573229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.573243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.573250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.573256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.573271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 
00:27:34.522 [2024-11-27 08:10:28.583247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.583306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.522 [2024-11-27 08:10:28.583320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.522 [2024-11-27 08:10:28.583327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.522 [2024-11-27 08:10:28.583333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.522 [2024-11-27 08:10:28.583348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.522 qpair failed and we were unable to recover it. 00:27:34.522 [2024-11-27 08:10:28.593270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.522 [2024-11-27 08:10:28.593325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.523 [2024-11-27 08:10:28.593339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.523 [2024-11-27 08:10:28.593346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.523 [2024-11-27 08:10:28.593352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.523 [2024-11-27 08:10:28.593368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.523 qpair failed and we were unable to recover it. 00:27:34.523 [2024-11-27 08:10:28.603267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.523 [2024-11-27 08:10:28.603319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.523 [2024-11-27 08:10:28.603333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.523 [2024-11-27 08:10:28.603340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.523 [2024-11-27 08:10:28.603351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.523 [2024-11-27 08:10:28.603367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.523 qpair failed and we were unable to recover it. 
00:27:34.523 [2024-11-27 08:10:28.613316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.523 [2024-11-27 08:10:28.613372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.523 [2024-11-27 08:10:28.613386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.523 [2024-11-27 08:10:28.613393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.523 [2024-11-27 08:10:28.613400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.523 [2024-11-27 08:10:28.613415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.523 qpair failed and we were unable to recover it. 00:27:34.523 [2024-11-27 08:10:28.623325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.523 [2024-11-27 08:10:28.623380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.523 [2024-11-27 08:10:28.623394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.523 [2024-11-27 08:10:28.623401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.523 [2024-11-27 08:10:28.623408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.523 [2024-11-27 08:10:28.623423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.523 qpair failed and we were unable to recover it. 00:27:34.782 [2024-11-27 08:10:28.633371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.782 [2024-11-27 08:10:28.633430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.782 [2024-11-27 08:10:28.633444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.782 [2024-11-27 08:10:28.633452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.782 [2024-11-27 08:10:28.633458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.782 [2024-11-27 08:10:28.633473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.782 qpair failed and we were unable to recover it. 
00:27:34.782 [2024-11-27 08:10:28.643417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.782 [2024-11-27 08:10:28.643479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.782 [2024-11-27 08:10:28.643493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.782 [2024-11-27 08:10:28.643500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.782 [2024-11-27 08:10:28.643506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.782 [2024-11-27 08:10:28.643522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.782 qpair failed and we were unable to recover it. 00:27:34.782 [2024-11-27 08:10:28.653502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.782 [2024-11-27 08:10:28.653577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.782 [2024-11-27 08:10:28.653591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.782 [2024-11-27 08:10:28.653598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.782 [2024-11-27 08:10:28.653604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.782 [2024-11-27 08:10:28.653619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.782 qpair failed and we were unable to recover it. 00:27:34.782 [2024-11-27 08:10:28.663492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.782 [2024-11-27 08:10:28.663551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.782 [2024-11-27 08:10:28.663567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.782 [2024-11-27 08:10:28.663575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.782 [2024-11-27 08:10:28.663582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.782 [2024-11-27 08:10:28.663598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.782 qpair failed and we were unable to recover it. 
00:27:34.782 [2024-11-27 08:10:28.673459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.782 [2024-11-27 08:10:28.673514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.782 [2024-11-27 08:10:28.673528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.782 [2024-11-27 08:10:28.673535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.782 [2024-11-27 08:10:28.673542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.782 [2024-11-27 08:10:28.673557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.782 qpair failed and we were unable to recover it. 00:27:34.782 [2024-11-27 08:10:28.683509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.782 [2024-11-27 08:10:28.683564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.782 [2024-11-27 08:10:28.683578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.782 [2024-11-27 08:10:28.683585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.782 [2024-11-27 08:10:28.683591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.782 [2024-11-27 08:10:28.683607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.782 qpair failed and we were unable to recover it. 00:27:34.782 [2024-11-27 08:10:28.693549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.782 [2024-11-27 08:10:28.693615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.782 [2024-11-27 08:10:28.693634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.782 [2024-11-27 08:10:28.693642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.782 [2024-11-27 08:10:28.693649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.782 [2024-11-27 08:10:28.693665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.782 qpair failed and we were unable to recover it. 
00:27:34.782 [2024-11-27 08:10:28.703607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.782 [2024-11-27 08:10:28.703666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.782 [2024-11-27 08:10:28.703680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.782 [2024-11-27 08:10:28.703687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.703694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.703709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 00:27:34.783 [2024-11-27 08:10:28.713575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.713631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.713645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.713653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.713660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.713675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 00:27:34.783 [2024-11-27 08:10:28.723625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.723684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.723699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.723706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.723713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.723728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 
00:27:34.783 [2024-11-27 08:10:28.733666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.733729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.733744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.733752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.733762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.733778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 00:27:34.783 [2024-11-27 08:10:28.743691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.743760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.743776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.743784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.743790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.743806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 00:27:34.783 [2024-11-27 08:10:28.753704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.753766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.753780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.753787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.753794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.753810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 
00:27:34.783 [2024-11-27 08:10:28.763740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.763795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.763811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.763819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.763826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.763842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 00:27:34.783 [2024-11-27 08:10:28.773773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.773831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.773845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.773852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.773859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.773874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 00:27:34.783 [2024-11-27 08:10:28.783801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.783859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.783873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.783881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.783888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.783903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 
00:27:34.783 [2024-11-27 08:10:28.793862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.793917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.793932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.793939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.793950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.793966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 00:27:34.783 [2024-11-27 08:10:28.803844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.803956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.803971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.803978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.803985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.804000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 00:27:34.783 [2024-11-27 08:10:28.813886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.813949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.813963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.813971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.813978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.813993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 
00:27:34.783 [2024-11-27 08:10:28.823907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.823970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.823987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.823995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.824001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.783 [2024-11-27 08:10:28.824017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.783 qpair failed and we were unable to recover it. 00:27:34.783 [2024-11-27 08:10:28.833921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.783 [2024-11-27 08:10:28.833982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.783 [2024-11-27 08:10:28.833996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.783 [2024-11-27 08:10:28.834004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.783 [2024-11-27 08:10:28.834011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.784 [2024-11-27 08:10:28.834026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.784 qpair failed and we were unable to recover it. 00:27:34.784 [2024-11-27 08:10:28.843990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.784 [2024-11-27 08:10:28.844046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.784 [2024-11-27 08:10:28.844060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.784 [2024-11-27 08:10:28.844068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.784 [2024-11-27 08:10:28.844075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.784 [2024-11-27 08:10:28.844090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.784 qpair failed and we were unable to recover it. 
00:27:34.784 [2024-11-27 08:10:28.853995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.784 [2024-11-27 08:10:28.854076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.784 [2024-11-27 08:10:28.854090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.784 [2024-11-27 08:10:28.854097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.784 [2024-11-27 08:10:28.854103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.784 [2024-11-27 08:10:28.854118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.784 qpair failed and we were unable to recover it. 00:27:34.784 [2024-11-27 08:10:28.864029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.784 [2024-11-27 08:10:28.864088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.784 [2024-11-27 08:10:28.864102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.784 [2024-11-27 08:10:28.864112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.784 [2024-11-27 08:10:28.864119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.784 [2024-11-27 08:10:28.864135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.784 qpair failed and we were unable to recover it. 00:27:34.784 [2024-11-27 08:10:28.874041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.784 [2024-11-27 08:10:28.874098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.784 [2024-11-27 08:10:28.874113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.784 [2024-11-27 08:10:28.874120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.784 [2024-11-27 08:10:28.874127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.784 [2024-11-27 08:10:28.874143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.784 qpair failed and we were unable to recover it. 
00:27:34.784 [2024-11-27 08:10:28.884080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.784 [2024-11-27 08:10:28.884159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.784 [2024-11-27 08:10:28.884173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.784 [2024-11-27 08:10:28.884181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.784 [2024-11-27 08:10:28.884187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:34.784 [2024-11-27 08:10:28.884203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.784 qpair failed and we were unable to recover it. 00:27:35.043 [2024-11-27 08:10:28.894115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.043 [2024-11-27 08:10:28.894172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.043 [2024-11-27 08:10:28.894186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.043 [2024-11-27 08:10:28.894193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.043 [2024-11-27 08:10:28.894201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.043 [2024-11-27 08:10:28.894216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.043 qpair failed and we were unable to recover it. 00:27:35.043 [2024-11-27 08:10:28.904143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.043 [2024-11-27 08:10:28.904203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.043 [2024-11-27 08:10:28.904217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.043 [2024-11-27 08:10:28.904224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.043 [2024-11-27 08:10:28.904230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.043 [2024-11-27 08:10:28.904249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.043 qpair failed and we were unable to recover it. 
00:27:35.043 [2024-11-27 08:10:28.914178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.043 [2024-11-27 08:10:28.914231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.043 [2024-11-27 08:10:28.914245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.043 [2024-11-27 08:10:28.914252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.043 [2024-11-27 08:10:28.914259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.043 [2024-11-27 08:10:28.914274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.043 qpair failed and we were unable to recover it. 00:27:35.043 [2024-11-27 08:10:28.924211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.043 [2024-11-27 08:10:28.924271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:28.924284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:28.924292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:28.924298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:28.924313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 00:27:35.044 [2024-11-27 08:10:28.934229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:28.934293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:28.934308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:28.934316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:28.934323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:28.934338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 
00:27:35.044 [2024-11-27 08:10:28.944319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:28.944377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:28.944392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:28.944399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:28.944407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:28.944423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 00:27:35.044 [2024-11-27 08:10:28.954289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:28.954347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:28.954360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:28.954368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:28.954374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:28.954390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 00:27:35.044 [2024-11-27 08:10:28.964311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:28.964365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:28.964379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:28.964386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:28.964393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:28.964408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 
00:27:35.044 [2024-11-27 08:10:28.974351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:28.974410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:28.974424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:28.974431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:28.974438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:28.974452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 00:27:35.044 [2024-11-27 08:10:28.984379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:28.984440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:28.984453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:28.984461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:28.984467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:28.984482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 00:27:35.044 [2024-11-27 08:10:28.994394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:28.994451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:28.994465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:28.994475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:28.994482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:28.994497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 
00:27:35.044 [2024-11-27 08:10:29.004380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:29.004470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:29.004483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:29.004491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:29.004498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:29.004513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 00:27:35.044 [2024-11-27 08:10:29.014474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:29.014541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:29.014555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:29.014562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:29.014569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:29.014584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 00:27:35.044 [2024-11-27 08:10:29.024516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:29.024575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:29.024589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:29.024597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:29.024604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:29.024619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 
00:27:35.044 [2024-11-27 08:10:29.034557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:29.034609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:29.034623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:29.034631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:29.034638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:29.034656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 00:27:35.044 [2024-11-27 08:10:29.044546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.044 [2024-11-27 08:10:29.044604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.044 [2024-11-27 08:10:29.044617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.044 [2024-11-27 08:10:29.044625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.044 [2024-11-27 08:10:29.044632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.044 [2024-11-27 08:10:29.044646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.044 qpair failed and we were unable to recover it. 00:27:35.044 [2024-11-27 08:10:29.054602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.045 [2024-11-27 08:10:29.054667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.045 [2024-11-27 08:10:29.054681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.045 [2024-11-27 08:10:29.054688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.045 [2024-11-27 08:10:29.054695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.045 [2024-11-27 08:10:29.054711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.045 qpair failed and we were unable to recover it. 
00:27:35.045 [2024-11-27 08:10:29.064635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.045 [2024-11-27 08:10:29.064694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.045 [2024-11-27 08:10:29.064708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.045 [2024-11-27 08:10:29.064715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.045 [2024-11-27 08:10:29.064722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.045 [2024-11-27 08:10:29.064738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.045 qpair failed and we were unable to recover it. 00:27:35.045 [2024-11-27 08:10:29.074620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.045 [2024-11-27 08:10:29.074681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.045 [2024-11-27 08:10:29.074695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.045 [2024-11-27 08:10:29.074703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.045 [2024-11-27 08:10:29.074710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.045 [2024-11-27 08:10:29.074725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.045 qpair failed and we were unable to recover it. 00:27:35.045 [2024-11-27 08:10:29.084669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.045 [2024-11-27 08:10:29.084725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.045 [2024-11-27 08:10:29.084739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.045 [2024-11-27 08:10:29.084747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.045 [2024-11-27 08:10:29.084754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.045 [2024-11-27 08:10:29.084769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.045 qpair failed and we were unable to recover it. 
00:27:35.045 [2024-11-27 08:10:29.094716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.045 [2024-11-27 08:10:29.094778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.045 [2024-11-27 08:10:29.094792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.045 [2024-11-27 08:10:29.094800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.045 [2024-11-27 08:10:29.094807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.045 [2024-11-27 08:10:29.094822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.045 qpair failed and we were unable to recover it. 00:27:35.045 [2024-11-27 08:10:29.104755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.045 [2024-11-27 08:10:29.104814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.045 [2024-11-27 08:10:29.104829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.045 [2024-11-27 08:10:29.104836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.045 [2024-11-27 08:10:29.104844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.045 [2024-11-27 08:10:29.104860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.045 qpair failed and we were unable to recover it. 00:27:35.045 [2024-11-27 08:10:29.114785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.045 [2024-11-27 08:10:29.114841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.045 [2024-11-27 08:10:29.114856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.045 [2024-11-27 08:10:29.114863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.045 [2024-11-27 08:10:29.114870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.045 [2024-11-27 08:10:29.114885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.045 qpair failed and we were unable to recover it. 
00:27:35.045 [2024-11-27 08:10:29.124788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.045 [2024-11-27 08:10:29.124846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.045 [2024-11-27 08:10:29.124865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.045 [2024-11-27 08:10:29.124873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.045 [2024-11-27 08:10:29.124879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.045 [2024-11-27 08:10:29.124895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.045 qpair failed and we were unable to recover it. 00:27:35.045 [2024-11-27 08:10:29.134864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.045 [2024-11-27 08:10:29.134926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.045 [2024-11-27 08:10:29.134941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.045 [2024-11-27 08:10:29.134952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.045 [2024-11-27 08:10:29.134960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.045 [2024-11-27 08:10:29.134975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.045 qpair failed and we were unable to recover it. 00:27:35.045 [2024-11-27 08:10:29.144905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.045 [2024-11-27 08:10:29.144968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.045 [2024-11-27 08:10:29.144983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.045 [2024-11-27 08:10:29.144990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.045 [2024-11-27 08:10:29.144997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.045 [2024-11-27 08:10:29.145012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.045 qpair failed and we were unable to recover it. 
00:27:35.309 [2024-11-27 08:10:29.154886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.309 [2024-11-27 08:10:29.154945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.309 [2024-11-27 08:10:29.154965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.309 [2024-11-27 08:10:29.154972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.309 [2024-11-27 08:10:29.154979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.309 [2024-11-27 08:10:29.154995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.309 qpair failed and we were unable to recover it. 00:27:35.309 [2024-11-27 08:10:29.164908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.309 [2024-11-27 08:10:29.164986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.309 [2024-11-27 08:10:29.165001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.309 [2024-11-27 08:10:29.165008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.309 [2024-11-27 08:10:29.165018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.309 [2024-11-27 08:10:29.165034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.309 qpair failed and we were unable to recover it. 00:27:35.309 [2024-11-27 08:10:29.174973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.309 [2024-11-27 08:10:29.175035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.309 [2024-11-27 08:10:29.175049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.309 [2024-11-27 08:10:29.175057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.309 [2024-11-27 08:10:29.175064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.309 [2024-11-27 08:10:29.175079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.309 qpair failed and we were unable to recover it. 
00:27:35.309 [2024-11-27 08:10:29.184970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.309 [2024-11-27 08:10:29.185029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.309 [2024-11-27 08:10:29.185043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.309 [2024-11-27 08:10:29.185050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.309 [2024-11-27 08:10:29.185057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.310 [2024-11-27 08:10:29.185073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.310 qpair failed and we were unable to recover it. 00:27:35.310 [2024-11-27 08:10:29.194983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.310 [2024-11-27 08:10:29.195040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.310 [2024-11-27 08:10:29.195054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.310 [2024-11-27 08:10:29.195061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.310 [2024-11-27 08:10:29.195068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.310 [2024-11-27 08:10:29.195083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.310 qpair failed and we were unable to recover it. 00:27:35.310 [2024-11-27 08:10:29.205016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.310 [2024-11-27 08:10:29.205078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.310 [2024-11-27 08:10:29.205092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.310 [2024-11-27 08:10:29.205100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.310 [2024-11-27 08:10:29.205106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.310 [2024-11-27 08:10:29.205122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.310 qpair failed and we were unable to recover it. 
00:27:35.310 [2024-11-27 08:10:29.215053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.310 [2024-11-27 08:10:29.215114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.310 [2024-11-27 08:10:29.215128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.310 [2024-11-27 08:10:29.215136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.310 [2024-11-27 08:10:29.215143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.310 [2024-11-27 08:10:29.215159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.310 qpair failed and we were unable to recover it. 00:27:35.310 [2024-11-27 08:10:29.225097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.310 [2024-11-27 08:10:29.225153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.310 [2024-11-27 08:10:29.225167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.310 [2024-11-27 08:10:29.225174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.310 [2024-11-27 08:10:29.225182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.310 [2024-11-27 08:10:29.225197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.310 qpair failed and we were unable to recover it. 00:27:35.310 [2024-11-27 08:10:29.235099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.310 [2024-11-27 08:10:29.235156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.310 [2024-11-27 08:10:29.235170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.310 [2024-11-27 08:10:29.235178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.310 [2024-11-27 08:10:29.235185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.311 [2024-11-27 08:10:29.235200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.311 qpair failed and we were unable to recover it. 
00:27:35.311 [2024-11-27 08:10:29.245131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.311 [2024-11-27 08:10:29.245186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.311 [2024-11-27 08:10:29.245200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.311 [2024-11-27 08:10:29.245208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.311 [2024-11-27 08:10:29.245216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.311 [2024-11-27 08:10:29.245231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.311 qpair failed and we were unable to recover it. 00:27:35.311 [2024-11-27 08:10:29.255189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.311 [2024-11-27 08:10:29.255253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.311 [2024-11-27 08:10:29.255270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.311 [2024-11-27 08:10:29.255278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.311 [2024-11-27 08:10:29.255284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.311 [2024-11-27 08:10:29.255300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.311 qpair failed and we were unable to recover it. 00:27:35.311 [2024-11-27 08:10:29.265188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.311 [2024-11-27 08:10:29.265249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.311 [2024-11-27 08:10:29.265264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.311 [2024-11-27 08:10:29.265272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.311 [2024-11-27 08:10:29.265279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.311 [2024-11-27 08:10:29.265294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.311 qpair failed and we were unable to recover it. 
00:27:35.311 [2024-11-27 08:10:29.275221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.312 [2024-11-27 08:10:29.275281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.312 [2024-11-27 08:10:29.275296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.312 [2024-11-27 08:10:29.275303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.312 [2024-11-27 08:10:29.275310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.312 [2024-11-27 08:10:29.275326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.312 qpair failed and we were unable to recover it. 00:27:35.312 [2024-11-27 08:10:29.285237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.312 [2024-11-27 08:10:29.285294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.312 [2024-11-27 08:10:29.285310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.312 [2024-11-27 08:10:29.285318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.312 [2024-11-27 08:10:29.285325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.312 [2024-11-27 08:10:29.285341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.312 qpair failed and we were unable to recover it. 00:27:35.312 [2024-11-27 08:10:29.295275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.312 [2024-11-27 08:10:29.295351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.312 [2024-11-27 08:10:29.295366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.312 [2024-11-27 08:10:29.295373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.312 [2024-11-27 08:10:29.295383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.312 [2024-11-27 08:10:29.295399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.312 qpair failed and we were unable to recover it. 
00:27:35.312 [2024-11-27 08:10:29.305318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.312 [2024-11-27 08:10:29.305395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.312 [2024-11-27 08:10:29.305410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.312 [2024-11-27 08:10:29.305417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.312 [2024-11-27 08:10:29.305423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.312 [2024-11-27 08:10:29.305439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.312 qpair failed and we were unable to recover it. 00:27:35.312 [2024-11-27 08:10:29.315314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.312 [2024-11-27 08:10:29.315371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.312 [2024-11-27 08:10:29.315385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.313 [2024-11-27 08:10:29.315392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.313 [2024-11-27 08:10:29.315399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.313 [2024-11-27 08:10:29.315414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-11-27 08:10:29.325371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.313 [2024-11-27 08:10:29.325424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.313 [2024-11-27 08:10:29.325437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.313 [2024-11-27 08:10:29.325445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.313 [2024-11-27 08:10:29.325451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.313 [2024-11-27 08:10:29.325466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.313 qpair failed and we were unable to recover it. 
00:27:35.313 [2024-11-27 08:10:29.335403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.313 [2024-11-27 08:10:29.335481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.313 [2024-11-27 08:10:29.335496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.313 [2024-11-27 08:10:29.335503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.313 [2024-11-27 08:10:29.335510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.313 [2024-11-27 08:10:29.335525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-11-27 08:10:29.345400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.313 [2024-11-27 08:10:29.345459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.313 [2024-11-27 08:10:29.345473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.313 [2024-11-27 08:10:29.345481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.313 [2024-11-27 08:10:29.345488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.313 [2024-11-27 08:10:29.345503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-11-27 08:10:29.355429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.313 [2024-11-27 08:10:29.355491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.313 [2024-11-27 08:10:29.355505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.313 [2024-11-27 08:10:29.355512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.313 [2024-11-27 08:10:29.355520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.313 [2024-11-27 08:10:29.355536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.313 qpair failed and we were unable to recover it. 
00:27:35.313 [2024-11-27 08:10:29.365440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.313 [2024-11-27 08:10:29.365538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.313 [2024-11-27 08:10:29.365552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.313 [2024-11-27 08:10:29.365559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.313 [2024-11-27 08:10:29.365565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.314 [2024-11-27 08:10:29.365581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-11-27 08:10:29.375518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.314 [2024-11-27 08:10:29.375592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.314 [2024-11-27 08:10:29.375607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.314 [2024-11-27 08:10:29.375614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.314 [2024-11-27 08:10:29.375620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.314 [2024-11-27 08:10:29.375635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-11-27 08:10:29.385524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.314 [2024-11-27 08:10:29.385579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.314 [2024-11-27 08:10:29.385596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.314 [2024-11-27 08:10:29.385603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.314 [2024-11-27 08:10:29.385610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.314 [2024-11-27 08:10:29.385625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.314 qpair failed and we were unable to recover it. 
00:27:35.314 [2024-11-27 08:10:29.395555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.314 [2024-11-27 08:10:29.395612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.314 [2024-11-27 08:10:29.395626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.314 [2024-11-27 08:10:29.395633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.314 [2024-11-27 08:10:29.395639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.314 [2024-11-27 08:10:29.395655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-11-27 08:10:29.405593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.315 [2024-11-27 08:10:29.405686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.315 [2024-11-27 08:10:29.405700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.315 [2024-11-27 08:10:29.405707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.315 [2024-11-27 08:10:29.405713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.315 [2024-11-27 08:10:29.405729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.588 [2024-11-27 08:10:29.415623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.415682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.415697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.415704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.415711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.415726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 
00:27:35.588 [2024-11-27 08:10:29.425683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.425739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.425753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.425764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.425771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.425786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 00:27:35.588 [2024-11-27 08:10:29.435666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.435717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.435731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.435738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.435744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.435759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 00:27:35.588 [2024-11-27 08:10:29.445633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.445691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.445705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.445713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.445719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.445733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 
00:27:35.588 [2024-11-27 08:10:29.455738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.455801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.455816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.455824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.455830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.455845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 00:27:35.588 [2024-11-27 08:10:29.465698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.465754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.465768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.465775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.465782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.465800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 00:27:35.588 [2024-11-27 08:10:29.475793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.475846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.475860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.475868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.475874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.475890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 
00:27:35.588 [2024-11-27 08:10:29.485825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.485880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.485894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.485901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.485908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.485924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 00:27:35.588 [2024-11-27 08:10:29.495861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.495918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.495932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.495939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.495950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.495966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 00:27:35.588 [2024-11-27 08:10:29.505881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.505942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.505960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.505967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.505974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.505989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 
00:27:35.588 [2024-11-27 08:10:29.515903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.515975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.515989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.515997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.588 [2024-11-27 08:10:29.516003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.588 [2024-11-27 08:10:29.516019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.588 qpair failed and we were unable to recover it. 00:27:35.588 [2024-11-27 08:10:29.525967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.588 [2024-11-27 08:10:29.526025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.588 [2024-11-27 08:10:29.526039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.588 [2024-11-27 08:10:29.526047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.526053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.526069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 00:27:35.589 [2024-11-27 08:10:29.535998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.536059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.536072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.536080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.536087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.536102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 
00:27:35.589 [2024-11-27 08:10:29.545984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.546040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.546054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.546061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.546068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.546083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 00:27:35.589 [2024-11-27 08:10:29.556021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.556079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.556093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.556104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.556110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.556125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 00:27:35.589 [2024-11-27 08:10:29.566041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.566104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.566118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.566125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.566132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.566147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 
00:27:35.589 [2024-11-27 08:10:29.576091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.576151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.576166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.576173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.576180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.576195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 00:27:35.589 [2024-11-27 08:10:29.586108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.586189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.586204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.586211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.586218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.586233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 00:27:35.589 [2024-11-27 08:10:29.596131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.596219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.596233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.596240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.596247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.596265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 
00:27:35.589 [2024-11-27 08:10:29.606170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.606279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.606293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.606300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.606308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.606324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 00:27:35.589 [2024-11-27 08:10:29.616143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.616207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.616221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.616229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.616236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.616251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 00:27:35.589 [2024-11-27 08:10:29.626228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.626288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.626302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.626310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.626316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.626331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 
00:27:35.589 [2024-11-27 08:10:29.636235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.636297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.636311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.636318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.636324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.636340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 00:27:35.589 [2024-11-27 08:10:29.646249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.646322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.646336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.646343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.646349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.589 [2024-11-27 08:10:29.646364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.589 qpair failed and we were unable to recover it. 00:27:35.589 [2024-11-27 08:10:29.656301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.589 [2024-11-27 08:10:29.656380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.589 [2024-11-27 08:10:29.656395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.589 [2024-11-27 08:10:29.656402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.589 [2024-11-27 08:10:29.656408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.590 [2024-11-27 08:10:29.656423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.590 qpair failed and we were unable to recover it. 
00:27:35.590 [2024-11-27 08:10:29.666352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.590 [2024-11-27 08:10:29.666414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.590 [2024-11-27 08:10:29.666428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.590 [2024-11-27 08:10:29.666435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.590 [2024-11-27 08:10:29.666441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.590 [2024-11-27 08:10:29.666457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.590 qpair failed and we were unable to recover it. 00:27:35.590 [2024-11-27 08:10:29.676315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.590 [2024-11-27 08:10:29.676396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.590 [2024-11-27 08:10:29.676410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.590 [2024-11-27 08:10:29.676417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.590 [2024-11-27 08:10:29.676424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.590 [2024-11-27 08:10:29.676439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.590 qpair failed and we were unable to recover it. 00:27:35.590 [2024-11-27 08:10:29.686372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.590 [2024-11-27 08:10:29.686429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.590 [2024-11-27 08:10:29.686447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.590 [2024-11-27 08:10:29.686454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.590 [2024-11-27 08:10:29.686461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.590 [2024-11-27 08:10:29.686477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.590 qpair failed and we were unable to recover it. 
00:27:35.848 [2024-11-27 08:10:29.696468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.848 [2024-11-27 08:10:29.696529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.848 [2024-11-27 08:10:29.696543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.848 [2024-11-27 08:10:29.696551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.696557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.696573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 00:27:35.849 [2024-11-27 08:10:29.706405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.706465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.706479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.706487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.706494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.706509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 00:27:35.849 [2024-11-27 08:10:29.716506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.716562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.716575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.716582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.716588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.716604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 
00:27:35.849 [2024-11-27 08:10:29.726486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.726543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.726559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.726567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.726578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.726595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 00:27:35.849 [2024-11-27 08:10:29.736488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.736568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.736582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.736589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.736595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.736610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 00:27:35.849 [2024-11-27 08:10:29.746558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.746625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.746640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.746648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.746655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.746670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 
00:27:35.849 [2024-11-27 08:10:29.756528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.756590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.756604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.756612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.756618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.756634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 00:27:35.849 [2024-11-27 08:10:29.766590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.766658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.766672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.766679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.766686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.766701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 00:27:35.849 [2024-11-27 08:10:29.776702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.776763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.776776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.776783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.776790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.776804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 
00:27:35.849 [2024-11-27 08:10:29.786674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.786737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.786753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.786761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.786768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.786783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 00:27:35.849 [2024-11-27 08:10:29.796700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.796756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.796770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.796777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.796784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.796800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 00:27:35.849 [2024-11-27 08:10:29.806697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.806769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.806783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.806790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.806796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.806812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 
00:27:35.849 [2024-11-27 08:10:29.816767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.816828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.816848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.849 [2024-11-27 08:10:29.816855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.849 [2024-11-27 08:10:29.816861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.849 [2024-11-27 08:10:29.816877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.849 qpair failed and we were unable to recover it. 00:27:35.849 [2024-11-27 08:10:29.826777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.849 [2024-11-27 08:10:29.826846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.849 [2024-11-27 08:10:29.826860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.826868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.826875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.826890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 00:27:35.850 [2024-11-27 08:10:29.836839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.836894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.836909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.836916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.836922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.836938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 
00:27:35.850 [2024-11-27 08:10:29.846828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.846886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.846900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.846908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.846914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.846930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 00:27:35.850 [2024-11-27 08:10:29.856810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.856865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.856879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.856886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.856896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.856911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 00:27:35.850 [2024-11-27 08:10:29.866824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.866882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.866896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.866903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.866910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.866924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 
00:27:35.850 [2024-11-27 08:10:29.876919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.876987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.877002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.877009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.877016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.877031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 00:27:35.850 [2024-11-27 08:10:29.887034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.887106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.887120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.887127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.887133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.887148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 00:27:35.850 [2024-11-27 08:10:29.896979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.897038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.897052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.897060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.897067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.897082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 
00:27:35.850 [2024-11-27 08:10:29.906988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.907048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.907062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.907069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.907076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.907091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 00:27:35.850 [2024-11-27 08:10:29.917076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.917179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.917194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.917201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.917207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.917223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 00:27:35.850 [2024-11-27 08:10:29.926989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.927047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.927061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.927068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.927075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.927090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 
00:27:35.850 [2024-11-27 08:10:29.937028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.937086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.937100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.937107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.937114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.937129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 00:27:35.850 [2024-11-27 08:10:29.947045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.850 [2024-11-27 08:10:29.947111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.850 [2024-11-27 08:10:29.947125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.850 [2024-11-27 08:10:29.947132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.850 [2024-11-27 08:10:29.947138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:35.850 [2024-11-27 08:10:29.947154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.850 qpair failed and we were unable to recover it. 00:27:36.110 [2024-11-27 08:10:29.957119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.110 [2024-11-27 08:10:29.957179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.110 [2024-11-27 08:10:29.957195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.110 [2024-11-27 08:10:29.957203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.110 [2024-11-27 08:10:29.957210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.110 [2024-11-27 08:10:29.957225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.110 qpair failed and we were unable to recover it. 
00:27:36.110 [2024-11-27 08:10:29.967108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.110 [2024-11-27 08:10:29.967169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.110 [2024-11-27 08:10:29.967184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.110 [2024-11-27 08:10:29.967191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.110 [2024-11-27 08:10:29.967198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.110 [2024-11-27 08:10:29.967214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.110 qpair failed and we were unable to recover it. 00:27:36.110 [2024-11-27 08:10:29.977258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.110 [2024-11-27 08:10:29.977314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.110 [2024-11-27 08:10:29.977328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.110 [2024-11-27 08:10:29.977335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.110 [2024-11-27 08:10:29.977342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.110 [2024-11-27 08:10:29.977357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.110 qpair failed and we were unable to recover it. 00:27:36.110 [2024-11-27 08:10:29.987320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.110 [2024-11-27 08:10:29.987404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.110 [2024-11-27 08:10:29.987418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.110 [2024-11-27 08:10:29.987429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.110 [2024-11-27 08:10:29.987435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.110 [2024-11-27 08:10:29.987450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.110 qpair failed and we were unable to recover it. 
00:27:36.110 [2024-11-27 08:10:29.997211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.110 [2024-11-27 08:10:29.997270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.110 [2024-11-27 08:10:29.997285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.110 [2024-11-27 08:10:29.997292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.110 [2024-11-27 08:10:29.997298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.110 [2024-11-27 08:10:29.997314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.110 qpair failed and we were unable to recover it. 00:27:36.110 [2024-11-27 08:10:30.007320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.110 [2024-11-27 08:10:30.007382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.110 [2024-11-27 08:10:30.007403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.110 [2024-11-27 08:10:30.007412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.110 [2024-11-27 08:10:30.007419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.110 [2024-11-27 08:10:30.007437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.110 qpair failed and we were unable to recover it. 00:27:36.110 [2024-11-27 08:10:30.017273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.110 [2024-11-27 08:10:30.017332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.110 [2024-11-27 08:10:30.017346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.110 [2024-11-27 08:10:30.017353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.110 [2024-11-27 08:10:30.017361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.110 [2024-11-27 08:10:30.017377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.110 qpair failed and we were unable to recover it. 
00:27:36.110 [2024-11-27 08:10:30.027368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.110 [2024-11-27 08:10:30.027461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.110 [2024-11-27 08:10:30.027505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.110 [2024-11-27 08:10:30.027522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.110 [2024-11-27 08:10:30.027544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.110 [2024-11-27 08:10:30.027584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.110 qpair failed and we were unable to recover it. 00:27:36.110 [2024-11-27 08:10:30.037368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.110 [2024-11-27 08:10:30.037429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.110 [2024-11-27 08:10:30.037450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.110 [2024-11-27 08:10:30.037459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.110 [2024-11-27 08:10:30.037466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.110 [2024-11-27 08:10:30.037486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.110 qpair failed and we were unable to recover it. 00:27:36.111 [2024-11-27 08:10:30.047428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.047485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.047502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.047510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.047517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.047534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 
00:27:36.111 [2024-11-27 08:10:30.057463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.057539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.057555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.057562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.057568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.057584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 00:27:36.111 [2024-11-27 08:10:30.067531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.067596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.067612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.067619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.067626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.067642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 00:27:36.111 [2024-11-27 08:10:30.077539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.077644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.077660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.077667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.077673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.077689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 
00:27:36.111 [2024-11-27 08:10:30.087538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.087592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.087606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.087613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.087621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.087636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 00:27:36.111 [2024-11-27 08:10:30.097517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.097578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.097592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.097600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.097607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.097622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 00:27:36.111 [2024-11-27 08:10:30.107653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.107716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.107731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.107738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.107745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.107760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 
00:27:36.111 [2024-11-27 08:10:30.117674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.117740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.117757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.117768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.117774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.117789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 00:27:36.111 [2024-11-27 08:10:30.127693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.127766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.127782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.127789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.127795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.127811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 00:27:36.111 [2024-11-27 08:10:30.137723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.137785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.137799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.137806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.137812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.137828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 
00:27:36.111 [2024-11-27 08:10:30.147718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.147771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.147785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.147792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.147798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.147814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 00:27:36.111 [2024-11-27 08:10:30.157752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.157845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.157859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.157866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.157873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.157893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 00:27:36.111 [2024-11-27 08:10:30.167757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.111 [2024-11-27 08:10:30.167812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.111 [2024-11-27 08:10:30.167826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.111 [2024-11-27 08:10:30.167833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.111 [2024-11-27 08:10:30.167839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.111 [2024-11-27 08:10:30.167854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.111 qpair failed and we were unable to recover it. 
00:27:36.111 [2024-11-27 08:10:30.177797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.112 [2024-11-27 08:10:30.177857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.112 [2024-11-27 08:10:30.177871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.112 [2024-11-27 08:10:30.177878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.112 [2024-11-27 08:10:30.177885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.112 [2024-11-27 08:10:30.177901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.112 qpair failed and we were unable to recover it. 00:27:36.112 [2024-11-27 08:10:30.187820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.112 [2024-11-27 08:10:30.187874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.112 [2024-11-27 08:10:30.187888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.112 [2024-11-27 08:10:30.187894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.112 [2024-11-27 08:10:30.187901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.112 [2024-11-27 08:10:30.187916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.112 qpair failed and we were unable to recover it. 00:27:36.112 [2024-11-27 08:10:30.197842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.112 [2024-11-27 08:10:30.197903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.112 [2024-11-27 08:10:30.197918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.112 [2024-11-27 08:10:30.197925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.112 [2024-11-27 08:10:30.197933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.112 [2024-11-27 08:10:30.197951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.112 qpair failed and we were unable to recover it. 
00:27:36.112 [2024-11-27 08:10:30.207868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.112 [2024-11-27 08:10:30.207926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.112 [2024-11-27 08:10:30.207940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.112 [2024-11-27 08:10:30.207951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.112 [2024-11-27 08:10:30.207958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.112 [2024-11-27 08:10:30.207974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.112 qpair failed and we were unable to recover it. 00:27:36.371 [2024-11-27 08:10:30.217911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.371 [2024-11-27 08:10:30.217969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.371 [2024-11-27 08:10:30.217983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.371 [2024-11-27 08:10:30.217991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.371 [2024-11-27 08:10:30.217997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.371 [2024-11-27 08:10:30.218013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.371 qpair failed and we were unable to recover it. 00:27:36.371 [2024-11-27 08:10:30.227858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.371 [2024-11-27 08:10:30.227917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.371 [2024-11-27 08:10:30.227932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.371 [2024-11-27 08:10:30.227939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.371 [2024-11-27 08:10:30.227950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.371 [2024-11-27 08:10:30.227966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.371 qpair failed and we were unable to recover it. 
00:27:36.371 [2024-11-27 08:10:30.237950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.371 [2024-11-27 08:10:30.238009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.238023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.238030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.238037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.238052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 00:27:36.372 [2024-11-27 08:10:30.247982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.248043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.248060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.248067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.248074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.248089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 00:27:36.372 [2024-11-27 08:10:30.258059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.258114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.258128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.258135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.258142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.258157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 
00:27:36.372 [2024-11-27 08:10:30.268045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.268109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.268123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.268132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.268139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.268155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 00:27:36.372 [2024-11-27 08:10:30.278098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.278158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.278172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.278179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.278186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.278201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 00:27:36.372 [2024-11-27 08:10:30.288085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.288140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.288154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.288161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.288171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.288187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 
00:27:36.372 [2024-11-27 08:10:30.298134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.298196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.298210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.298218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.298225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.298241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 00:27:36.372 [2024-11-27 08:10:30.308154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.308213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.308227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.308234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.308241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.308257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 00:27:36.372 [2024-11-27 08:10:30.318224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.318284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.318298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.318306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.318312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.318327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 
00:27:36.372 [2024-11-27 08:10:30.328201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.328258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.328272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.328279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.328286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.328301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 00:27:36.372 [2024-11-27 08:10:30.338272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.338329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.338343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.338349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.338356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.338372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 00:27:36.372 [2024-11-27 08:10:30.348293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.348350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.348363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.348370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.348377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.348392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 
00:27:36.372 [2024-11-27 08:10:30.358306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.372 [2024-11-27 08:10:30.358369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.372 [2024-11-27 08:10:30.358383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.372 [2024-11-27 08:10:30.358390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.372 [2024-11-27 08:10:30.358396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.372 [2024-11-27 08:10:30.358412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.372 qpair failed and we were unable to recover it. 00:27:36.372 [2024-11-27 08:10:30.368299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.368354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.368368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.368375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.368382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.368397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 00:27:36.373 [2024-11-27 08:10:30.378365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.378426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.378444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.378452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.378458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.378473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 
00:27:36.373 [2024-11-27 08:10:30.388388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.388453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.388467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.388475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.388481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.388497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 00:27:36.373 [2024-11-27 08:10:30.398401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.398462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.398476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.398483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.398490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.398506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 00:27:36.373 [2024-11-27 08:10:30.408446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.408499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.408512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.408519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.408526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.408541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 
00:27:36.373 [2024-11-27 08:10:30.418412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.418469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.418482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.418490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.418500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.418515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 00:27:36.373 [2024-11-27 08:10:30.428509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.428570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.428584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.428592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.428598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.428614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 00:27:36.373 [2024-11-27 08:10:30.438539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.438618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.438632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.438639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.438645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.438662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 
00:27:36.373 [2024-11-27 08:10:30.448576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.448625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.448639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.448646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.448652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.448667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 00:27:36.373 [2024-11-27 08:10:30.458579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.458637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.458651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.458658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.458665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.458680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 00:27:36.373 [2024-11-27 08:10:30.468612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.468668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.468682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.468689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.468695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.468711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 
00:27:36.373 [2024-11-27 08:10:30.478663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.373 [2024-11-27 08:10:30.478720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.373 [2024-11-27 08:10:30.478734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.373 [2024-11-27 08:10:30.478742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.373 [2024-11-27 08:10:30.478748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.373 [2024-11-27 08:10:30.478764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.373 qpair failed and we were unable to recover it. 00:27:36.633 [2024-11-27 08:10:30.488674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.633 [2024-11-27 08:10:30.488729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.633 [2024-11-27 08:10:30.488743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.633 [2024-11-27 08:10:30.488750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.633 [2024-11-27 08:10:30.488757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.633 [2024-11-27 08:10:30.488772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.633 qpair failed and we were unable to recover it. 00:27:36.633 [2024-11-27 08:10:30.498656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.633 [2024-11-27 08:10:30.498727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.633 [2024-11-27 08:10:30.498742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.633 [2024-11-27 08:10:30.498749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.633 [2024-11-27 08:10:30.498756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.633 [2024-11-27 08:10:30.498771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.633 qpair failed and we were unable to recover it. 
00:27:36.633 [2024-11-27 08:10:30.508741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.633 [2024-11-27 08:10:30.508805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.633 [2024-11-27 08:10:30.508819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.633 [2024-11-27 08:10:30.508826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.633 [2024-11-27 08:10:30.508832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.633 [2024-11-27 08:10:30.508847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.633 qpair failed and we were unable to recover it. 00:27:36.633 [2024-11-27 08:10:30.518762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.633 [2024-11-27 08:10:30.518818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.633 [2024-11-27 08:10:30.518831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.633 [2024-11-27 08:10:30.518839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.633 [2024-11-27 08:10:30.518845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.633 [2024-11-27 08:10:30.518860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.633 qpair failed and we were unable to recover it. 00:27:36.633 [2024-11-27 08:10:30.528787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.633 [2024-11-27 08:10:30.528840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.633 [2024-11-27 08:10:30.528854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.633 [2024-11-27 08:10:30.528861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.633 [2024-11-27 08:10:30.528867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.633 [2024-11-27 08:10:30.528882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.633 qpair failed and we were unable to recover it. 
00:27:36.633 [2024-11-27 08:10:30.538832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.633 [2024-11-27 08:10:30.538888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.633 [2024-11-27 08:10:30.538901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.633 [2024-11-27 08:10:30.538908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.633 [2024-11-27 08:10:30.538915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.633 [2024-11-27 08:10:30.538930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.633 qpair failed and we were unable to recover it. 00:27:36.633 [2024-11-27 08:10:30.548856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.633 [2024-11-27 08:10:30.548909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.633 [2024-11-27 08:10:30.548924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.633 [2024-11-27 08:10:30.548933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.633 [2024-11-27 08:10:30.548940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.633 [2024-11-27 08:10:30.548959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.633 qpair failed and we were unable to recover it. 00:27:36.633 [2024-11-27 08:10:30.558889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.633 [2024-11-27 08:10:30.558961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.633 [2024-11-27 08:10:30.558976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.633 [2024-11-27 08:10:30.558983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.633 [2024-11-27 08:10:30.558989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.633 [2024-11-27 08:10:30.559005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.633 qpair failed and we were unable to recover it. 
00:27:36.633 [2024-11-27 08:10:30.568904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.633 [2024-11-27 08:10:30.568961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.633 [2024-11-27 08:10:30.568975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.633 [2024-11-27 08:10:30.568983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.633 [2024-11-27 08:10:30.568989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.633 [2024-11-27 08:10:30.569004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.633 qpair failed and we were unable to recover it. 00:27:36.633 [2024-11-27 08:10:30.578963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.633 [2024-11-27 08:10:30.579038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.633 [2024-11-27 08:10:30.579053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.633 [2024-11-27 08:10:30.579060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.633 [2024-11-27 08:10:30.579066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.633 [2024-11-27 08:10:30.579082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.633 qpair failed and we were unable to recover it. 00:27:36.634 [2024-11-27 08:10:30.588965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.589022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.589036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.589043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.589050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.589070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 
00:27:36.634 [2024-11-27 08:10:30.598981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.599040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.599054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.599061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.599068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.599083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 00:27:36.634 [2024-11-27 08:10:30.609017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.609074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.609088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.609095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.609101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.609117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 00:27:36.634 [2024-11-27 08:10:30.619052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.619121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.619135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.619142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.619149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.619164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 
00:27:36.634 [2024-11-27 08:10:30.629071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.629151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.629167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.629174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.629180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.629196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 00:27:36.634 [2024-11-27 08:10:30.639063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.639135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.639152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.639160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.639167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.639185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 00:27:36.634 [2024-11-27 08:10:30.649127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.649184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.649198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.649206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.649212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.649226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 
00:27:36.634 [2024-11-27 08:10:30.659175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.659233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.659247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.659254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.659261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.659276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 00:27:36.634 [2024-11-27 08:10:30.669213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.669282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.669296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.669303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.669309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.669325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 00:27:36.634 [2024-11-27 08:10:30.679217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.679271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.679287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.679295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.679302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.679318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 
00:27:36.634 [2024-11-27 08:10:30.689249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.689304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.689318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.689325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.689332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.689347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 00:27:36.634 [2024-11-27 08:10:30.699272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.699343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.699358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.699365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.699371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.699387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 00:27:36.634 [2024-11-27 08:10:30.709348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.634 [2024-11-27 08:10:30.709402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.634 [2024-11-27 08:10:30.709417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.634 [2024-11-27 08:10:30.709424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.634 [2024-11-27 08:10:30.709431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.634 [2024-11-27 08:10:30.709446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.634 qpair failed and we were unable to recover it. 
00:27:36.635 [2024-11-27 08:10:30.719332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.635 [2024-11-27 08:10:30.719393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.635 [2024-11-27 08:10:30.719407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.635 [2024-11-27 08:10:30.719415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.635 [2024-11-27 08:10:30.719422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.635 [2024-11-27 08:10:30.719440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.635 qpair failed and we were unable to recover it. 00:27:36.635 [2024-11-27 08:10:30.729322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.635 [2024-11-27 08:10:30.729384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.635 [2024-11-27 08:10:30.729400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.635 [2024-11-27 08:10:30.729408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.635 [2024-11-27 08:10:30.729415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.635 [2024-11-27 08:10:30.729431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.635 qpair failed and we were unable to recover it. 00:27:36.635 [2024-11-27 08:10:30.739413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.635 [2024-11-27 08:10:30.739469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.635 [2024-11-27 08:10:30.739483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.635 [2024-11-27 08:10:30.739490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.635 [2024-11-27 08:10:30.739497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.635 [2024-11-27 08:10:30.739513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.635 qpair failed and we were unable to recover it. 
00:27:36.894 [2024-11-27 08:10:30.749420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.894 [2024-11-27 08:10:30.749479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.894 [2024-11-27 08:10:30.749493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.894 [2024-11-27 08:10:30.749501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.894 [2024-11-27 08:10:30.749507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.894 [2024-11-27 08:10:30.749523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.894 qpair failed and we were unable to recover it. 00:27:36.894 [2024-11-27 08:10:30.759458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.894 [2024-11-27 08:10:30.759516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.894 [2024-11-27 08:10:30.759530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.894 [2024-11-27 08:10:30.759538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.894 [2024-11-27 08:10:30.759544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.894 [2024-11-27 08:10:30.759560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.894 qpair failed and we were unable to recover it. 00:27:36.894 [2024-11-27 08:10:30.769473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.894 [2024-11-27 08:10:30.769529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.894 [2024-11-27 08:10:30.769542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.894 [2024-11-27 08:10:30.769550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.894 [2024-11-27 08:10:30.769557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.894 [2024-11-27 08:10:30.769572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.894 qpair failed and we were unable to recover it. 
00:27:36.894 [2024-11-27 08:10:30.779522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.894 [2024-11-27 08:10:30.779603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.894 [2024-11-27 08:10:30.779617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.894 [2024-11-27 08:10:30.779624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.779630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.779645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 00:27:36.895 [2024-11-27 08:10:30.789541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.789634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.789649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.789656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.789662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.789677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 00:27:36.895 [2024-11-27 08:10:30.799556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.799609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.799623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.799630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.799637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.799653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 
00:27:36.895 [2024-11-27 08:10:30.809599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.809655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.809672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.809680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.809687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.809702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 00:27:36.895 [2024-11-27 08:10:30.819631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.819740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.819754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.819761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.819769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.819785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 00:27:36.895 [2024-11-27 08:10:30.829660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.829720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.829735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.829743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.829749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.829764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 
00:27:36.895 [2024-11-27 08:10:30.839753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.839814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.839827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.839835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.839841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.839857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 00:27:36.895 [2024-11-27 08:10:30.849720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.849777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.849792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.849799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.849808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.849825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 00:27:36.895 [2024-11-27 08:10:30.859752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.859811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.859826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.859833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.859840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.859855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 
00:27:36.895 [2024-11-27 08:10:30.869767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.869824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.869840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.869849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.869857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.869872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 00:27:36.895 [2024-11-27 08:10:30.879797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.879854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.879869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.879876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.879883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.879898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 00:27:36.895 [2024-11-27 08:10:30.889826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.889883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.889897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.889904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.889911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.889926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 
00:27:36.895 [2024-11-27 08:10:30.899867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.899925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.899939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.899950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.895 [2024-11-27 08:10:30.899958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.895 [2024-11-27 08:10:30.899973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.895 qpair failed and we were unable to recover it. 00:27:36.895 [2024-11-27 08:10:30.909899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.895 [2024-11-27 08:10:30.909969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.895 [2024-11-27 08:10:30.909983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.895 [2024-11-27 08:10:30.909990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.896 [2024-11-27 08:10:30.909996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.896 [2024-11-27 08:10:30.910013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.896 qpair failed and we were unable to recover it. 00:27:36.896 [2024-11-27 08:10:30.919927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.896 [2024-11-27 08:10:30.919990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.896 [2024-11-27 08:10:30.920004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.896 [2024-11-27 08:10:30.920012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.896 [2024-11-27 08:10:30.920018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.896 [2024-11-27 08:10:30.920034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.896 qpair failed and we were unable to recover it. 
00:27:36.896 [2024-11-27 08:10:30.929964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.896 [2024-11-27 08:10:30.930036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.896 [2024-11-27 08:10:30.930050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.896 [2024-11-27 08:10:30.930057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.896 [2024-11-27 08:10:30.930064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.896 [2024-11-27 08:10:30.930079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.896 qpair failed and we were unable to recover it. 00:27:36.896 [2024-11-27 08:10:30.939994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.896 [2024-11-27 08:10:30.940053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.896 [2024-11-27 08:10:30.940069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.896 [2024-11-27 08:10:30.940076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.896 [2024-11-27 08:10:30.940083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.896 [2024-11-27 08:10:30.940098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.896 qpair failed and we were unable to recover it. 00:27:36.896 [2024-11-27 08:10:30.950013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.896 [2024-11-27 08:10:30.950072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.896 [2024-11-27 08:10:30.950086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.896 [2024-11-27 08:10:30.950093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.896 [2024-11-27 08:10:30.950100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.896 [2024-11-27 08:10:30.950115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.896 qpair failed and we were unable to recover it. 
00:27:36.896 [2024-11-27 08:10:30.960050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.896 [2024-11-27 08:10:30.960107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.896 [2024-11-27 08:10:30.960121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.896 [2024-11-27 08:10:30.960129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.896 [2024-11-27 08:10:30.960135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.896 [2024-11-27 08:10:30.960150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.896 qpair failed and we were unable to recover it. 00:27:36.896 [2024-11-27 08:10:30.970074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.896 [2024-11-27 08:10:30.970134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.896 [2024-11-27 08:10:30.970148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.896 [2024-11-27 08:10:30.970156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.896 [2024-11-27 08:10:30.970162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.896 [2024-11-27 08:10:30.970178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.896 qpair failed and we were unable to recover it. 00:27:36.896 [2024-11-27 08:10:30.980112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.896 [2024-11-27 08:10:30.980169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.896 [2024-11-27 08:10:30.980183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.896 [2024-11-27 08:10:30.980190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.896 [2024-11-27 08:10:30.980200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.896 [2024-11-27 08:10:30.980215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.896 qpair failed and we were unable to recover it. 
00:27:36.896 [2024-11-27 08:10:30.990145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.896 [2024-11-27 08:10:30.990200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.896 [2024-11-27 08:10:30.990214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.896 [2024-11-27 08:10:30.990221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.896 [2024-11-27 08:10:30.990228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.896 [2024-11-27 08:10:30.990243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.896 qpair failed and we were unable to recover it. 00:27:36.896 [2024-11-27 08:10:31.000161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.896 [2024-11-27 08:10:31.000213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.896 [2024-11-27 08:10:31.000228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.896 [2024-11-27 08:10:31.000235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.896 [2024-11-27 08:10:31.000241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:36.896 [2024-11-27 08:10:31.000257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.896 qpair failed and we were unable to recover it. 00:27:37.155 [2024-11-27 08:10:31.010197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.155 [2024-11-27 08:10:31.010255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.155 [2024-11-27 08:10:31.010269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.155 [2024-11-27 08:10:31.010277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.155 [2024-11-27 08:10:31.010284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c8000b90 00:27:37.155 [2024-11-27 08:10:31.010299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.155 qpair failed and we were unable to recover it. 
00:27:37.155 [2024-11-27 08:10:31.020240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.155 [2024-11-27 08:10:31.020321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.155 [2024-11-27 08:10:31.020350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.155 [2024-11-27 08:10:31.020362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.155 [2024-11-27 08:10:31.020371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa19be0 00:27:37.155 [2024-11-27 08:10:31.020396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.155 qpair failed and we were unable to recover it. 00:27:37.155 [2024-11-27 08:10:31.030301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.155 [2024-11-27 08:10:31.030357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.155 [2024-11-27 08:10:31.030373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.155 [2024-11-27 08:10:31.030381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.155 [2024-11-27 08:10:31.030388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa19be0 00:27:37.155 [2024-11-27 08:10:31.030404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.155 qpair failed and we were unable to recover it. 00:27:37.155 [2024-11-27 08:10:31.040285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.155 [2024-11-27 08:10:31.040355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.155 [2024-11-27 08:10:31.040383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.155 [2024-11-27 08:10:31.040395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.155 [2024-11-27 08:10:31.040404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:27:37.155 [2024-11-27 08:10:31.040429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:37.155 qpair failed and we were unable to recover it. 
00:27:37.155 [2024-11-27 08:10:31.050254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.155 [2024-11-27 08:10:31.050318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.155 [2024-11-27 08:10:31.050334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.155 [2024-11-27 08:10:31.050341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.155 [2024-11-27 08:10:31.050348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:27:37.155 [2024-11-27 08:10:31.050364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:37.155 qpair failed and we were unable to recover it. 00:27:37.155 [2024-11-27 08:10:31.050519] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:37.155 A controller has encountered a failure and is being reset. 00:27:37.155 [2024-11-27 08:10:31.060364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.155 [2024-11-27 08:10:31.060453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.155 [2024-11-27 08:10:31.060481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.155 [2024-11-27 08:10:31.060493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.155 [2024-11-27 08:10:31.060503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39d0000b90 00:27:37.155 [2024-11-27 08:10:31.060529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.155 qpair failed and we were unable to recover it. 00:27:37.155 [2024-11-27 08:10:31.070399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.155 [2024-11-27 08:10:31.070457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.155 [2024-11-27 08:10:31.070473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.155 [2024-11-27 08:10:31.070480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.155 [2024-11-27 08:10:31.070487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39d0000b90 00:27:37.155 [2024-11-27 08:10:31.070504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.155 qpair failed and we were unable to recover it. 00:27:37.155 [2024-11-27 08:10:31.070621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa27b20 (9): Bad file descriptor 00:27:37.155 Controller properly reset. 
00:27:37.155 Initializing NVMe Controllers 00:27:37.155 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:37.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:37.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:37.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:37.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:37.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:37.155 Initialization complete. Launching workers. 00:27:37.155 Starting thread on core 1 00:27:37.155 Starting thread on core 2 00:27:37.155 Starting thread on core 3 00:27:37.155 Starting thread on core 0 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:37.155 00:27:37.155 real 0m10.844s 00:27:37.155 user 0m18.952s 00:27:37.155 sys 0m4.453s 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:37.155 ************************************ 00:27:37.155 END TEST nvmf_target_disconnect_tc2 00:27:37.155 ************************************ 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:37.155 rmmod nvme_tcp 00:27:37.155 rmmod nvme_fabrics 00:27:37.155 rmmod nvme_keyring 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2608310 ']' 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2608310 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2608310 ']' 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2608310 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:27:37.155 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2608310 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2608310' 00:27:37.415 killing process with pid 2608310 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2608310 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2608310 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.415 08:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.947 08:10:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:39.947 00:27:39.947 real 0m19.080s 00:27:39.947 user 0m46.719s 00:27:39.947 sys 0m8.972s 00:27:39.947 08:10:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:39.947 08:10:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:39.947 ************************************ 00:27:39.947 END TEST nvmf_target_disconnect 00:27:39.947 ************************************ 00:27:39.947 08:10:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:39.947 00:27:39.947 real 5m42.703s 00:27:39.947 user 10m25.151s 00:27:39.947 sys 1m51.447s 00:27:39.947 08:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:39.948 08:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.948 ************************************ 00:27:39.948 END TEST nvmf_host 00:27:39.948 ************************************ 00:27:39.948 08:10:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:39.948 08:10:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:39.948 08:10:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:39.948 08:10:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:39.948 08:10:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:39.948 08:10:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.948 ************************************ 00:27:39.948 START TEST nvmf_target_core_interrupt_mode 00:27:39.948 ************************************ 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:39.948 * Looking for test storage... 00:27:39.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:39.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.948 --rc genhtml_branch_coverage=1 00:27:39.948 --rc genhtml_function_coverage=1 00:27:39.948 --rc genhtml_legend=1 00:27:39.948 --rc geninfo_all_blocks=1 00:27:39.948 --rc geninfo_unexecuted_blocks=1 00:27:39.948 00:27:39.948 ' 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:39.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.948 --rc genhtml_branch_coverage=1 00:27:39.948 --rc genhtml_function_coverage=1 00:27:39.948 --rc genhtml_legend=1 00:27:39.948 --rc geninfo_all_blocks=1 00:27:39.948 --rc geninfo_unexecuted_blocks=1 00:27:39.948 00:27:39.948 ' 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:39.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.948 --rc genhtml_branch_coverage=1 00:27:39.948 --rc genhtml_function_coverage=1 00:27:39.948 --rc genhtml_legend=1 00:27:39.948 --rc geninfo_all_blocks=1 00:27:39.948 --rc geninfo_unexecuted_blocks=1 00:27:39.948 00:27:39.948 ' 00:27:39.948 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:39.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.948 --rc genhtml_branch_coverage=1 00:27:39.948 --rc genhtml_function_coverage=1 00:27:39.948 --rc genhtml_legend=1 00:27:39.948 --rc geninfo_all_blocks=1 00:27:39.948 --rc geninfo_unexecuted_blocks=1 00:27:39.949 00:27:39.949 ' 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:39.949 ************************************ 00:27:39.949 START TEST nvmf_abort 00:27:39.949 ************************************ 00:27:39.949 08:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:39.949 * Looking for test storage... 00:27:39.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:39.949 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:39.949 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:39.949 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:40.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.209 --rc genhtml_branch_coverage=1 00:27:40.209 --rc genhtml_function_coverage=1 00:27:40.209 --rc genhtml_legend=1 00:27:40.209 --rc geninfo_all_blocks=1 00:27:40.209 --rc geninfo_unexecuted_blocks=1 00:27:40.209 00:27:40.209 ' 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:40.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.209 --rc genhtml_branch_coverage=1 00:27:40.209 --rc genhtml_function_coverage=1 00:27:40.209 --rc genhtml_legend=1 00:27:40.209 --rc geninfo_all_blocks=1 00:27:40.209 --rc geninfo_unexecuted_blocks=1 00:27:40.209 00:27:40.209 ' 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:40.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.209 --rc genhtml_branch_coverage=1 00:27:40.209 --rc genhtml_function_coverage=1 00:27:40.209 --rc genhtml_legend=1 00:27:40.209 --rc geninfo_all_blocks=1 00:27:40.209 --rc geninfo_unexecuted_blocks=1 00:27:40.209 00:27:40.209 ' 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:40.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.209 --rc genhtml_branch_coverage=1 00:27:40.209 --rc genhtml_function_coverage=1 00:27:40.209 --rc genhtml_legend=1 00:27:40.209 --rc geninfo_all_blocks=1 00:27:40.209 --rc geninfo_unexecuted_blocks=1 00:27:40.209 00:27:40.209 ' 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.209 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.210 08:10:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:40.210 08:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.478 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.479 08:10:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:45.479 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:45.479 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:45.479 Found net devices under 0000:86:00.0: cvl_0_0 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:45.479 Found net devices under 0000:86:00.1: cvl_0_1 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:27:45.479 00:27:45.479 --- 10.0.0.2 ping statistics --- 00:27:45.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.479 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:27:45.479 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:27:45.480 00:27:45.480 --- 10.0.0.1 ping statistics --- 00:27:45.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.480 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:27:45.480 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.480 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:45.480 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.480 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.480 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.480 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.480 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.480 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2612844 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2612844 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2612844 ']' 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.739 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.739 [2024-11-27 08:10:39.659488] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:45.739 [2024-11-27 08:10:39.660470] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:27:45.739 [2024-11-27 08:10:39.660506] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.739 [2024-11-27 08:10:39.724121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:45.739 [2024-11-27 08:10:39.764760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.739 [2024-11-27 08:10:39.764795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.739 [2024-11-27 08:10:39.764804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.739 [2024-11-27 08:10:39.764810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.739 [2024-11-27 08:10:39.764815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.739 [2024-11-27 08:10:39.766128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.739 [2024-11-27 08:10:39.766214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.739 [2024-11-27 08:10:39.766215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.739 [2024-11-27 08:10:39.834716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:45.739 [2024-11-27 08:10:39.834751] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:45.739 [2024-11-27 08:10:39.835006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
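The nvmf_tcp_init trace above reduces to a small two-interface topology. Condensed from the commands in this log (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this run), the setup is roughly:

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface and verify reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target itself then runs inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE

Keeping the target-side port in its own network namespace is what makes traffic between two ports on the same host actually traverse the physical link rather than being short-circuited through the kernel's local route.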
00:27:45.739 [2024-11-27 08:10:39.835067] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.998 [2024-11-27 08:10:39.914877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.998 Malloc0 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.998 Delay0 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.998 [2024-11-27 08:10:39.994845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.998 08:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.998 08:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.998 08:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:46.257 [2024-11-27 08:10:40.109697] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:48.790 Initializing NVMe Controllers 00:27:48.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:48.790 controller IO queue size 128 less than required 00:27:48.790 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:48.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:48.790 Initialization complete. Launching workers. 
00:27:48.790 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36660 00:27:48.790 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36717, failed to submit 66 00:27:48.791 success 36660, unsuccessful 57, failed 0 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:48.791 rmmod nvme_tcp 00:27:48.791 rmmod nvme_fabrics 00:27:48.791 rmmod nvme_keyring 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2612844 ']' 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2612844 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2612844 ']' 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2612844 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2612844 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2612844' 00:27:48.791 killing process with pid 2612844 
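Stripped of the xtrace noise, target/abort.sh above amounts to the RPC bring-up below followed by one run of the abort example. The commands and arguments are copied from this trace; the $rpc shorthand is added here only for readability:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    # TCP transport plus a delay bdev stacked on a malloc bdev
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose Delay0 through subsystem cnode0 on the namespaced target address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # queue-depth-128 I/O against the delayed namespace, then abort it
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The delay bdev is presumably what keeps I/O outstanding long enough to be abortable, which is consistent with the statistics above: 36717 aborts submitted and 36660 of them successful.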
00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2612844 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2612844 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.791 08:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.699 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.699 00:27:50.699 real 0m10.751s 00:27:50.699 user 0m10.564s 00:27:50.699 sys 0m5.428s 00:27:50.699 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.699 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:50.699 ************************************ 00:27:50.699 END TEST nvmf_abort 00:27:50.699 ************************************ 00:27:50.699 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:50.699 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:50.699 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:50.699 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:50.699 ************************************ 00:27:50.699 START TEST nvmf_ns_hotplug_stress 00:27:50.699 ************************************ 00:27:50.699 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:50.958 * Looking for test storage... 
00:27:50.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.958 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:50.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.959 --rc genhtml_branch_coverage=1 00:27:50.959 --rc genhtml_function_coverage=1 00:27:50.959 --rc genhtml_legend=1 00:27:50.959 --rc geninfo_all_blocks=1 00:27:50.959 --rc geninfo_unexecuted_blocks=1 00:27:50.959 00:27:50.959 ' 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:50.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.959 --rc genhtml_branch_coverage=1 00:27:50.959 --rc genhtml_function_coverage=1 00:27:50.959 --rc genhtml_legend=1 00:27:50.959 --rc geninfo_all_blocks=1 00:27:50.959 --rc geninfo_unexecuted_blocks=1 00:27:50.959 00:27:50.959 ' 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:50.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.959 --rc genhtml_branch_coverage=1 00:27:50.959 --rc genhtml_function_coverage=1 00:27:50.959 --rc genhtml_legend=1 00:27:50.959 --rc geninfo_all_blocks=1 00:27:50.959 --rc geninfo_unexecuted_blocks=1 00:27:50.959 00:27:50.959 ' 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:50.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.959 --rc genhtml_branch_coverage=1 00:27:50.959 --rc genhtml_function_coverage=1 
00:27:50.959 --rc genhtml_legend=1 00:27:50.959 --rc geninfo_all_blocks=1 00:27:50.959 --rc geninfo_unexecuted_blocks=1 00:27:50.959 00:27:50.959 ' 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
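The scripts/common.sh trace above is just a dotted-version comparison ("is lcov 1.15 older than 2?") used to pick the matching set of lcov options. A simplified sketch of that kind of check, not a verbatim copy of the helper, looks like this:

    # compare two dotted versions; return 0 (true) if $1 < $2
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2"

For the 1.15-versus-2 case traced here the first components already differ (1 < 2), so the check returns true on the first iteration, matching the lt 1.15 2 result above.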
00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.959 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:50.960 08:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:56.226 08:10:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:56.226 08:10:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:56.226 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:56.226 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:56.226 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.227 
08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:56.227 Found net devices under 0000:86:00.0: cvl_0_0 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:56.227 Found net devices under 0000:86:00.1: cvl_0_1 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.227 08:10:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:56.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:27:56.227 00:27:56.227 --- 10.0.0.2 ping statistics --- 00:27:56.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.227 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:27:56.227 00:27:56.227 --- 10.0.0.1 ping statistics --- 00:27:56.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.227 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2616836 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2616836 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2616836 ']' 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
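The nvmf_tcp_init sequence traced above turns the two ports discovered under 0000:86:00.0 and 0000:86:00.1 (cvl_0_0 and cvl_0_1) into a self-contained TCP test bed: the target port is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24, the initiator port stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are ping-checked, and nvme-tcp is loaded. The two ports are evidently cabled to each other on this rig, which is why a ping from the root namespace reaches the address that lives inside the namespace. Condensed into a stand-alone sketch (interface names, addresses and the iptables rule are copied from the commands in this log; run as root):

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                          # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address (inside the namespace)
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator
    modprobe nvme-tcp

The NVMF_TARGET_NS_CMD / NVMF_APP assignments above are what later make the target application itself run under ip netns exec cvl_0_0_ns_spdk.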
00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.227 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:56.487 [2024-11-27 08:10:50.377831] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:56.487 [2024-11-27 08:10:50.378783] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:27:56.487 [2024-11-27 08:10:50.378816] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.487 [2024-11-27 08:10:50.447108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:56.487 [2024-11-27 08:10:50.487547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.487 [2024-11-27 08:10:50.487588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.487 [2024-11-27 08:10:50.487595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.487 [2024-11-27 08:10:50.487605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.487 [2024-11-27 08:10:50.487610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.487 [2024-11-27 08:10:50.488973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.487 [2024-11-27 08:10:50.489042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.487 [2024-11-27 08:10:50.489044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.487 [2024-11-27 08:10:50.558130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:56.487 [2024-11-27 08:10:50.558163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:56.487 [2024-11-27 08:10:50.558431] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:56.487 [2024-11-27 08:10:50.558492] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
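nvmfappstart then launches the target inside that namespace, ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE, and waitforlisten blocks until /var/tmp/spdk.sock answers. The startup notices line up with those flags: -e 0xFFFF produces the "Tracepoint Group Mask 0xFFFF specified" message, --interrupt-mode is what switches the app_thread and the three nvmf_tgt_poll_group threads to interrupt-driven operation, and -m 0xE (binary 1110) is why the EAL reports three usable cores and reactors start on cores 1, 2 and 3 while core 0 is left free. A small helper of this shape (hypothetical, not part of the test scripts) decodes such a mask the same way:

    # Hypothetical helper: expand an SPDK/DPDK core mask (e.g. the -m 0xE above)
    # into the list of core ids it selects.
    mask_to_cores() {
        local mask=$(( $1 ))          # accepts 0xE, 14, ...
        local core=0 cores=()
        while (( mask )); do
            if (( mask & 1 )); then cores+=("$core"); fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${cores[*]}"
    }
    mask_to_cores 0xE                 # prints "1 2 3", matching the three reactors above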
00:27:56.487 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.487 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:56.487 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:56.487 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:56.487 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:56.746 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.746 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:56.746 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:56.746 [2024-11-27 08:10:50.793552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.746 08:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:57.004 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.262 [2024-11-27 08:10:51.173837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.262 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:57.521 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:57.521 Malloc0 00:27:57.521 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:57.778 Delay0 00:27:57.778 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.036 08:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:58.036 NULL1 00:27:58.294 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
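With the target up, everything else in this run is RPC traffic over /var/tmp/spdk.sock: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, at most 10 namespaces, any host allowed) listening on 10.0.0.2:4420 plus the discovery listener, and two namespaces, one backed by the Delay0 delay bdev layered on Malloc0 and one by the null bdev NULL1. The spdk_nvme_perf run that starts next drives 512-byte random reads at queue depth 128 against that port for 30 seconds; -Q 1000 tells it to keep going on I/O errors and print only one in every thousand of them, which is where the "Read completed with error (sct=0, sc=11)" and "Message suppressed 999 times" lines below come from, and sc=11 (0x0b) corresponds to the NVMe generic "Invalid Namespace or Format" status that reads against nsid 1 receive while that namespace is momentarily detached. Condensed from the xtrace in this log (the loop is a close paraphrase of ns_hotplug_stress.sh lines 44-50 as they appear here):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # tree used in this run
    rpc=$spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns "$NQN" Delay0                  # becomes nsid 1
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns "$NQN" NULL1                   # becomes nsid 2

    $spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID"; do                             # loop until perf exits
        $rpc nvmf_subsystem_remove_ns "$NQN" 1                # hot-remove nsid 1 under load
        $rpc nvmf_subsystem_add_ns "$NQN" Delay0              # re-attach it
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size                # grow NULL1 (nsid 2) while attached
    done
    wait "$PERF_PID"

The delay bdev presumably exists so that reads are still in flight inside the target at the moment nsid 1 is yanked, which is the hot-unplug path this test is meant to stress; the perf summary further down shows the two namespaces finishing with very different IOPS and latency figures, consistent with that setup.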
00:27:58.294 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:58.294 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2617100 00:27:58.294 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:27:58.294 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.552 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.810 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:58.810 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:59.067 true 00:27:59.067 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:27:59.067 08:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.325 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.325 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:59.325 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:59.584 true 00:27:59.584 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:27:59.584 08:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.957 Read completed with error (sct=0, sc=11) 00:28:00.957 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.957 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:28:00.957 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:00.957 08:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:01.214 true 00:28:01.214 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:01.214 08:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.146 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.146 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:02.146 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:02.403 true 00:28:02.403 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:02.403 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.661 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.918 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:02.918 08:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:02.918 true 00:28:02.918 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:02.918 08:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.302 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.303 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.303 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:04.303 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:04.560 true 00:28:04.560 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:04.560 08:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.494 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.494 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:05.494 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:05.752 true 00:28:05.752 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:05.752 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.010 08:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.267 08:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:06.267 08:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:06.267 true 00:28:06.524 08:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:06.525 08:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.458 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.458 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:28:07.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.715 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:07.716 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:07.973 true 00:28:07.973 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:07.973 08:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.907 08:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.907 08:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:08.907 08:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:09.165 true 00:28:09.165 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:09.165 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.423 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.423 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:09.423 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:09.681 true 00:28:09.681 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:09.681 08:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:11.055 08:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.055 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:28:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:11.055 08:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:11.055 08:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:11.313 true 00:28:11.313 08:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:11.313 08:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.246 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.246 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:12.246 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:12.504 true 00:28:12.504 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:12.504 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.762 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.762 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:12.762 08:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:13.021 true 00:28:13.021 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:13.021 08:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.025 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.293 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.294 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:14.294 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:14.551 true 00:28:14.551 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:14.551 08:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:15.485 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.485 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:15.485 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:15.743 true 00:28:15.743 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:15.743 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.001 08:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.259 08:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:16.259 08:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:16.259 true 00:28:16.259 08:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:16.259 08:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.633 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.633 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:28:17.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.633 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:17.633 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:17.891 true 00:28:17.891 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:17.891 08:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.823 08:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.823 08:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:18.823 08:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:19.081 true 00:28:19.081 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:19.081 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.339 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.339 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:19.339 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:19.597 true 00:28:19.597 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:19.597 08:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.972 08:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.972 08:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:20.972 08:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:20.972 true 00:28:21.229 08:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:21.229 08:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.795 08:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.053 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:22.053 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:22.310 true 00:28:22.310 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:22.310 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.568 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.826 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:22.826 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:22.826 true 00:28:22.826 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:22.826 08:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.196 08:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:28:24.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.196 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:24.196 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:24.453 true 00:28:24.453 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:24.453 08:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.385 08:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.385 08:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:25.385 08:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:25.642 true 00:28:25.642 08:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:25.642 08:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.898 08:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.156 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:26.156 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:26.156 true 00:28:26.156 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:26.156 08:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.526 08:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.526 08:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:27.526 08:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:27.783 true 00:28:27.783 08:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:27.783 08:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.714 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.714 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:28.714 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:28.714 Initializing NVMe Controllers 00:28:28.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.714 Controller IO queue size 128, less than required. 00:28:28.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:28.714 Controller IO queue size 128, less than required. 00:28:28.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:28.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:28.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:28.714 Initialization complete. Launching workers. 
00:28:28.714 ======================================================== 00:28:28.714 Latency(us) 00:28:28.714 Device Information : IOPS MiB/s Average min max 00:28:28.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2162.47 1.06 41228.13 2687.49 1012821.59 00:28:28.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17860.15 8.72 7147.88 1579.45 325106.40 00:28:28.714 ======================================================== 00:28:28.714 Total : 20022.63 9.78 10828.59 1579.45 1012821.59 00:28:28.714 00:28:28.971 true 00:28:28.971 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2617100 00:28:28.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2617100) - No such process 00:28:28.971 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2617100 00:28:28.971 08:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.228 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:29.486 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:29.486 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:29.486 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:29.486 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.486 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:29.486 null0 00:28:29.486 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.486 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.486 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:29.743 null1 00:28:29.743 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.743 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.743 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:29.999 null2 00:28:29.999 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.999 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.999 08:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:30.257 null3 00:28:30.257 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:30.257 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.257 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:30.257 null4 00:28:30.257 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:30.257 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.257 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:30.620 null5 00:28:30.620 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:30.620 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.620 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:30.620 null6 00:28:30.620 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:30.620 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.620 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:30.923 null7 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
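The perf summary above closes the first phase (the "kill: (2617100) - No such process" line is just the loop noticing that perf has exited), the two original namespaces are removed, and the test switches to its parallel phase: nthreads=8, eight null bdevs null0 through null7 are created, and eight add_remove workers are launched in the background, one per namespace ID from 1 to 8, before the script waits on all of their PIDs (the "wait 2622431 2622433 ..." call further down). Reconstructed from the ns_hotplug_stress.sh xtrace shown here (lines 58-66), the launcher is roughly the following; add_remove itself is sketched a little further down:

    # Parallel-phase launcher, reconstructed from the xtrace above; $rpc and $NQN
    # as defined in the earlier sketch.
    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        $rpc bdev_null_create "null$i" 100 4096     # null0 .. null7
    done
    for (( i = 0; i < nthreads; i++ )); do
        add_remove "$((i + 1))" "null$i" &          # each worker churns its own namespace ID
        pids+=($!)
    done
    wait "${pids[@]}"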
00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.923 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
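Each of those workers is the small add_remove helper whose xtrace is interleaved through the lines above (ns_hotplug_stress.sh lines 14-18): it pins its namespace ID explicitly with -n, so the eight concurrent workers never collide on an ID, and it attaches and detaches its own null bdev ten times in a row. Reconstructed from that trace:

    # add_remove, reconstructed from lines 14-18 of ns_hotplug_stress.sh as traced
    # above; $rpc and $NQN as in the earlier sketches.
    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"   # attach with a fixed nsid
            $rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"           # detach it again
        done
    }
    # e.g. the worker paired with null2 runs: add_remove 3 null2

With eight of these running against one subsystem while the target sits in interrupt mode, the namespace list of cnode1 is being modified from several client processes at once, which is the concurrent hot-plug load the second half of the test is built around.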
00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2622431 2622433 2622434 2622436 2622438 2622440 2622442 2622444 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.924 08:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:31.182 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:31.182 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:31.182 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.182 08:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:31.182 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:31.182 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:31.182 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:31.182 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.439 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:31.440 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:31.440 
08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.697 08:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.697 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:31.954 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:31.954 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.954 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:31.954 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:31.954 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:31.954 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:31.954 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:31.954 08:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.211 08:11:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.211 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:32.212 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.212 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.212 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.212 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.212 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.212 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:32.212 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.212 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.212 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:32.506 08:11:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:32.506 08:11:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.506 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.507 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:32.507 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.507 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.507 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.507 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.507 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.507 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:32.507 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.507 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.507 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:32.763 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:32.763 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:32.763 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:32.763 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:28:32.763 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.763 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:32.763 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:32.763 08:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.019 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.275 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.275 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.275 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.275 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:33.275 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:33.275 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.275 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.275 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.532 08:11:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.532 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.788 08:11:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.788 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.789 08:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:34.046 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:34.046 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:34.046 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:34.046 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:34.046 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:34.046 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:34.046 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.046 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.303 08:11:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.303 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.304 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:34.561 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.561 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:34.561 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:34.561 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:34.561 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:34.561 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:34.561 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:34.561 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:34.819 08:11:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:34.819 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:35.077 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.077 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:35.077 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:35.077 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:35.077 08:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:35.077 08:11:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.077 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.077 rmmod nvme_tcp 00:28:35.077 rmmod nvme_fabrics 00:28:35.333 rmmod nvme_keyring 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 
-- # set -e 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2616836 ']' 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2616836 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2616836 ']' 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2616836 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2616836 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2616836' 00:28:35.333 killing process with pid 2616836 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2616836 00:28:35.333 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2616836 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.590 08:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.489 
08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.489 00:28:37.489 real 0m46.772s 00:28:37.489 user 2m57.975s 00:28:37.489 sys 0m19.863s 00:28:37.489 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.489 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:37.489 ************************************ 00:28:37.489 END TEST nvmf_ns_hotplug_stress 00:28:37.489 ************************************ 00:28:37.489 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:37.489 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:37.489 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.489 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:37.489 ************************************ 00:28:37.489 START TEST nvmf_delete_subsystem 00:28:37.489 ************************************ 00:28:37.489 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:37.747 * Looking for test storage... 00:28:37.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.747 08:11:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.747 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:37.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.748 --rc genhtml_branch_coverage=1 00:28:37.748 --rc genhtml_function_coverage=1 00:28:37.748 --rc genhtml_legend=1 00:28:37.748 --rc geninfo_all_blocks=1 00:28:37.748 --rc geninfo_unexecuted_blocks=1 00:28:37.748 00:28:37.748 ' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:37.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.748 --rc genhtml_branch_coverage=1 00:28:37.748 --rc genhtml_function_coverage=1 00:28:37.748 --rc genhtml_legend=1 00:28:37.748 --rc geninfo_all_blocks=1 00:28:37.748 --rc geninfo_unexecuted_blocks=1 00:28:37.748 00:28:37.748 ' 00:28:37.748 
08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:37.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.748 --rc genhtml_branch_coverage=1 00:28:37.748 --rc genhtml_function_coverage=1 00:28:37.748 --rc genhtml_legend=1 00:28:37.748 --rc geninfo_all_blocks=1 00:28:37.748 --rc geninfo_unexecuted_blocks=1 00:28:37.748 00:28:37.748 ' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:37.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.748 --rc genhtml_branch_coverage=1 00:28:37.748 --rc genhtml_function_coverage=1 00:28:37.748 --rc genhtml_legend=1 00:28:37.748 --rc geninfo_all_blocks=1 00:28:37.748 --rc geninfo_unexecuted_blocks=1 00:28:37.748 00:28:37.748 ' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.748 08:11:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.748 08:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:43.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:43.008 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.008 08:11:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:43.008 Found net devices under 0000:86:00.0: cvl_0_0 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:43.008 Found net devices under 0000:86:00.1: cvl_0_1 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 
-- # nvmf_tcp_init 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.008 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.009 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:28:43.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:28:43.009 00:28:43.009 --- 10.0.0.2 ping statistics --- 00:28:43.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.009 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:28:43.009 00:28:43.009 --- 10.0.0.1 ping statistics --- 00:28:43.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.009 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2626754 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2626754 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2626754 ']' 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.009 08:11:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.009 08:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.009 [2024-11-27 08:11:37.038149] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:43.009 [2024-11-27 08:11:37.039110] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:28:43.009 [2024-11-27 08:11:37.039146] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.009 [2024-11-27 08:11:37.105209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:43.266 [2024-11-27 08:11:37.148597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.266 [2024-11-27 08:11:37.148634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.266 [2024-11-27 08:11:37.148642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.266 [2024-11-27 08:11:37.148648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.266 [2024-11-27 08:11:37.148654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.266 [2024-11-27 08:11:37.149831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.266 [2024-11-27 08:11:37.149835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.266 [2024-11-27 08:11:37.219288] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:43.266 [2024-11-27 08:11:37.219549] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:43.266 [2024-11-27 08:11:37.219604] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:43.266 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.266 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:43.266 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:43.266 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.266 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 [2024-11-27 08:11:37.286317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 [2024-11-27 08:11:37.302514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 NULL1 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.267 08:11:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 Delay0 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2626818 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:43.267 08:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:43.522 [2024-11-27 08:11:37.383718] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:45.435 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:45.435 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.435 08:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 starting I/O failed: -6 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Write completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 starting I/O failed: -6 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 starting I/O failed: -6 00:28:45.435 Write completed with error (sct=0, sc=8) 00:28:45.435 Write completed with error (sct=0, sc=8) 00:28:45.435 Write completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 starting I/O failed: -6 00:28:45.435 Write completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 starting I/O failed: -6 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 starting I/O failed: -6 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.435 starting I/O failed: -6 00:28:45.435 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 [2024-11-27 08:11:39.481522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f751c000c40 is same with the state(6) to be set 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read 
completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, 
sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 starting I/O failed: -6 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 [2024-11-27 08:11:39.482058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e3680 is same with the state(6) to be set 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, 
sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Read completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 Write completed with error (sct=0, sc=8) 00:28:45.436 [2024-11-27 08:11:39.482279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f751c00d020 is same with the state(6) to be set 00:28:46.364 [2024-11-27 08:11:40.437545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e49b0 is same with the state(6) to be set 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 [2024-11-27 08:11:40.483203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e32c0 is same with the state(6) to be set 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 
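The repeated "completed with error (sct=0, sc=8)" entries above and below are the perf job's in-flight reads and writes being failed back while the subsystem is deleted underneath it: status code type 0 with status code 8 is the generic "command aborted due to SQ deletion" status, and the interleaved "starting I/O failed: -6" entries are new submissions rejected with -ENXIO once the queue pair is gone, so this flood is expected rather than a data-integrity problem. For quick triage of a run like this, a one-liner along the following lines tallies the aborted reads and writes (the log file path is illustrative, not part of this job):

  # illustrative: count aborted reads/writes reported while the subsystem was being deleted
  awk '/Read completed with error/  { reads++ }
       /Write completed with error/ { writes++ }
       END { printf "aborted reads=%d writes=%d\n", reads, writes }' nvmf-tcp-phy-autotest.log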
00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 [2024-11-27 08:11:40.483346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e34a0 is same with the state(6) to be set 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 [2024-11-27 08:11:40.483476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e3860 is same with the state(6) to be set 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Write completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.623 Read completed with error (sct=0, sc=8) 00:28:46.624 [2024-11-27 08:11:40.484243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f751c00d350 is same with the state(6) to be set 00:28:46.624 Initializing NVMe Controllers 
00:28:46.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.624 Controller IO queue size 128, less than required. 00:28:46.624 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:46.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:46.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:46.624 Initialization complete. Launching workers. 00:28:46.624 ======================================================== 00:28:46.624 Latency(us) 00:28:46.624 Device Information : IOPS MiB/s Average min max 00:28:46.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.61 0.09 960140.43 622.32 1012666.46 00:28:46.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.28 0.08 880693.33 416.14 1043020.33 00:28:46.624 ======================================================== 00:28:46.624 Total : 330.89 0.16 923096.88 416.14 1043020.33 00:28:46.624 00:28:46.624 [2024-11-27 08:11:40.485143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e49b0 (9): Bad file descriptor 00:28:46.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:46.624 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.624 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:46.624 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2626818 00:28:46.624 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2626818 00:28:47.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2626818) - No such process 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2626818 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2626818 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2626818 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@655 -- # es=1 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.188 08:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:47.188 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.188 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:47.189 [2024-11-27 08:11:41.006495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2627322 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627322 00:28:47.189 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:47.189 [2024-11-27 08:11:41.072866] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:28:47.445 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:47.445 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627322 00:28:47.445 08:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:48.009 08:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:48.009 08:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627322 00:28:48.009 08:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:48.573 08:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:48.573 08:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627322 00:28:48.573 08:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:49.136 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:49.136 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627322 00:28:49.136 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:49.700 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:49.700 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627322 00:28:49.700 08:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:49.957 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:49.957 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627322 00:28:49.957 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:50.214 Initializing NVMe Controllers 00:28:50.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.214 Controller IO queue size 128, less than required. 00:28:50.214 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:50.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:50.214 Initialization complete. Launching workers. 
00:28:50.214 ======================================================== 00:28:50.214 Latency(us) 00:28:50.214 Device Information : IOPS MiB/s Average min max 00:28:50.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003177.15 1000146.81 1011134.04 00:28:50.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005308.95 1000218.37 1011482.08 00:28:50.214 ======================================================== 00:28:50.214 Total : 256.00 0.12 1004243.05 1000146.81 1011482.08 00:28:50.214 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627322 00:28:50.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2627322) - No such process 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2627322 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:50.471 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:50.471 rmmod nvme_tcp 00:28:50.729 rmmod nvme_fabrics 00:28:50.729 rmmod nvme_keyring 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2626754 ']' 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2626754 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2626754 ']' 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2626754 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2626754 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2626754' 00:28:50.729 killing process with pid 2626754 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2626754 00:28:50.729 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2626754 00:28:50.987 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:50.987 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:50.987 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:50.987 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:50.987 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:50.987 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:50.987 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:50.988 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:50.988 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:50.988 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.988 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.988 08:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.889 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.889 00:28:52.889 real 0m15.324s 00:28:52.889 user 0m25.774s 00:28:52.889 sys 0m5.508s 00:28:52.889 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.889 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:52.889 ************************************ 00:28:52.889 END TEST nvmf_delete_subsystem 00:28:52.889 ************************************ 00:28:52.889 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:52.889 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:52.889 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.889 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:52.889 ************************************ 00:28:52.889 START TEST nvmf_host_management 00:28:52.889 ************************************ 00:28:52.889 08:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:53.148 * Looking for test storage... 00:28:53.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:53.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.149 --rc genhtml_branch_coverage=1 00:28:53.149 --rc genhtml_function_coverage=1 00:28:53.149 --rc genhtml_legend=1 00:28:53.149 --rc geninfo_all_blocks=1 00:28:53.149 --rc geninfo_unexecuted_blocks=1 00:28:53.149 00:28:53.149 ' 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:53.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.149 --rc genhtml_branch_coverage=1 00:28:53.149 --rc genhtml_function_coverage=1 00:28:53.149 --rc genhtml_legend=1 00:28:53.149 --rc geninfo_all_blocks=1 00:28:53.149 --rc geninfo_unexecuted_blocks=1 00:28:53.149 00:28:53.149 ' 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:53.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.149 --rc genhtml_branch_coverage=1 00:28:53.149 --rc genhtml_function_coverage=1 00:28:53.149 --rc genhtml_legend=1 00:28:53.149 --rc geninfo_all_blocks=1 00:28:53.149 --rc geninfo_unexecuted_blocks=1 00:28:53.149 00:28:53.149 ' 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:53.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.149 --rc genhtml_branch_coverage=1 00:28:53.149 --rc genhtml_function_coverage=1 00:28:53.149 --rc genhtml_legend=1 
00:28:53.149 --rc geninfo_all_blocks=1 00:28:53.149 --rc geninfo_unexecuted_blocks=1 00:28:53.149 00:28:53.149 ' 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.149 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.150 08:11:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.150 08:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.408 08:11:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:58.408 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:58.408 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
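The trace around this point is nvmf/common.sh's NIC discovery: the two Intel E810 functions (vendor 0x8086, device 0x159b) are matched against the supported-device tables, and for each PCI address the bound net device name is resolved from sysfs. A minimal standalone sketch of the same idea, assuming E810 parts and that each function is bound to a kernel driver exposing a net/ entry (this is a recap of the approach, not the helper's exact code), looks like:

  # minimal sketch of the sysfs lookup the trace performs (device ID and paths assumed)
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$net" ] && echo "Found net device under $pci: $(basename "$net")"
      done
  done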
00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:58.408 Found net devices under 0000:86:00.0: cvl_0_0 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:58.408 Found net devices under 0000:86:00.1: cvl_0_1 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.408 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.409 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.409 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.409 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.409 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.409 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.409 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.409 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.409 08:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:28:58.409 00:28:58.409 --- 10.0.0.2 ping statistics --- 00:28:58.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.409 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:28:58.409 00:28:58.409 --- 10.0.0.1 ping statistics --- 00:28:58.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.409 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2631275 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2631275 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2631275 ']' 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
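For readability, the nvmf_tcp_init sequence traced above amounts to: move the target-side port (cvl_0_0) into a fresh network namespace, address both ends of the 10.0.0.0/24 test path, open TCP port 4420 in the firewall, and confirm reachability in both directions with a ping each way. A condensed sketch using the interface names and addresses from the log; this mirrors nvmf_tcp_init but is not the harness script itself, and it assumes root privileges and that both netdevs already exist:

# Sketch of the TCP test-path setup shown above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator/host side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                       # host -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> host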
00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:58.409 [2024-11-27 08:11:52.225775] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:58.409 [2024-11-27 08:11:52.226699] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:28:58.409 [2024-11-27 08:11:52.226733] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.409 [2024-11-27 08:11:52.291412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.409 [2024-11-27 08:11:52.334538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.409 [2024-11-27 08:11:52.334575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.409 [2024-11-27 08:11:52.334584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.409 [2024-11-27 08:11:52.334591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.409 [2024-11-27 08:11:52.334596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.409 [2024-11-27 08:11:52.336233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.409 [2024-11-27 08:11:52.336320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.409 [2024-11-27 08:11:52.336362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.409 [2024-11-27 08:11:52.336363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:58.409 [2024-11-27 08:11:52.404924] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:58.409 [2024-11-27 08:11:52.405090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:58.409 [2024-11-27 08:11:52.405512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:58.409 [2024-11-27 08:11:52.405540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:58.409 [2024-11-27 08:11:52.405707] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
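With the data path in place, nvmfappstart runs the target inside that namespace in interrupt mode; the reactor and spdk_thread notices above confirm the mode took effect on cores 1-4 (mask 0x1E). Stripped of harness plumbing, the launch is roughly the sketch below (paths relative to the SPDK checkout; waiting via the framework_wait_init RPC is a simplification of the harness's waitforlisten helper):

# Prepend the netns wrapper to the app command, as NVMF_APP is rebuilt above,
# then start nvmf_tgt with tracepoints enabled and interrupt mode on cores 1-4.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt)
"${NVMF_APP[@]}" -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# Block until the target's RPC server is up before issuing configuration RPCs.
./scripts/rpc.py framework_wait_init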
00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.409 [2024-11-27 08:11:52.465056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.409 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.668 Malloc0 00:28:58.668 [2024-11-27 08:11:52.541039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2631411 00:28:58.668 08:11:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2631411 /var/tmp/bdevperf.sock 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2631411 ']' 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:58.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.668 { 00:28:58.668 "params": { 00:28:58.668 "name": "Nvme$subsystem", 00:28:58.668 "trtype": "$TEST_TRANSPORT", 00:28:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.668 "adrfam": "ipv4", 00:28:58.668 "trsvcid": "$NVMF_PORT", 00:28:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.668 "hdgst": ${hdgst:-false}, 00:28:58.668 "ddgst": ${ddgst:-false} 00:28:58.668 }, 00:28:58.668 "method": "bdev_nvme_attach_controller" 00:28:58.668 } 00:28:58.668 EOF 00:28:58.668 )") 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
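gen_nvmf_target_json expands that heredoc once per subsystem index (only index 0 here) and pipes the result through jq; the resolved fragment is printed just below, and the harness's --json /dev/fd/63 is ordinary process substitution. A stand-alone sketch of the same invocation follows; note that the outer "subsystems"/"config" envelope is not visible in the trace and is assumed here to be the usual SPDK JSON-config wrapper:

# Attach the remote namespace as bdev Nvme0n1 and drive it with 64 KiB verify I/O
# at queue depth 64 for 10 seconds, the parameters on the traced command line.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
  --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)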
00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:58.668 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:58.668 "params": { 00:28:58.668 "name": "Nvme0", 00:28:58.668 "trtype": "tcp", 00:28:58.668 "traddr": "10.0.0.2", 00:28:58.668 "adrfam": "ipv4", 00:28:58.668 "trsvcid": "4420", 00:28:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:58.668 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:58.668 "hdgst": false, 00:28:58.668 "ddgst": false 00:28:58.668 }, 00:28:58.668 "method": "bdev_nvme_attach_controller" 00:28:58.668 }' 00:28:58.668 [2024-11-27 08:11:52.636739] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:28:58.668 [2024-11-27 08:11:52.636789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631411 ] 00:28:58.668 [2024-11-27 08:11:52.701989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.668 [2024-11-27 08:11:52.743645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.926 Running I/O for 10 seconds... 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:58.926 08:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:58.926 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.926 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:58.926 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.184 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=94 00:28:59.184 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 94 -ge 100 ']' 00:28:59.184 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:59.443 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:59.443 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:59.443 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:59.443 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:59.443 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.443 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.443 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.443 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:28:59.444 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:28:59.444 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:59.444 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:59.444 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:59.444 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:59.444 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.444 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.444 [2024-11-27 08:11:53.329171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:59.444 [2024-11-27 08:11:53.329208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.444 [2024-11-27 08:11:53.329218] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:59.444 [2024-11-27 08:11:53.329226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.444 [2024-11-27 08:11:53.329233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:59.444 [2024-11-27 08:11:53.329240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.444 [2024-11-27 08:11:53.329247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:59.444 [2024-11-27 08:11:53.329254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.444 [2024-11-27 08:11:53.329261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76a510 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332652] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 
00:28:59.444 [2024-11-27 08:11:53.332791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63d70 is same with the state(6) to be set 00:28:59.444 [2024-11-27 08:11:53.332926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.332957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.332975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.332983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.332996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.445 [2024-11-27 08:11:53.333570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.445 [2024-11-27 08:11:53.333581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.446 [2024-11-27 08:11:53.333620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.446 [2024-11-27 08:11:53.333955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.446 [2024-11-27 08:11:53.333963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x983430 is same with the state(6) to be set 00:28:59.446 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:59.446 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.446 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.446 [2024-11-27 08:11:53.334945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:59.446 task offset: 90112 on job bdev=Nvme0n1 fails 00:28:59.446 00:28:59.446 Latency(us) 00:28:59.446 [2024-11-27T07:11:53.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.446 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.446 Job: Nvme0n1 ended in about 0.39 seconds with error 00:28:59.446 Verification LBA range: start 0x0 length 0x400 00:28:59.446 Nvme0n1 : 0.39 1792.71 112.04 162.97 0.00 31828.29 3818.18 27696.08 00:28:59.446 [2024-11-27T07:11:53.555Z] =================================================================================================================== 00:28:59.446 [2024-11-27T07:11:53.555Z] Total : 1792.71 112.04 162.97 0.00 31828.29 3818.18 27696.08 00:28:59.446 [2024-11-27 08:11:53.337343] 
app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:59.446 [2024-11-27 08:11:53.337364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76a510 (9): Bad file descriptor 00:28:59.446 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.446 08:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:59.446 [2024-11-27 08:11:53.431115] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2631411 00:29:00.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2631411) - No such process 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.378 { 00:29:00.378 "params": { 00:29:00.378 "name": "Nvme$subsystem", 00:29:00.378 "trtype": "$TEST_TRANSPORT", 00:29:00.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.378 "adrfam": "ipv4", 00:29:00.378 "trsvcid": "$NVMF_PORT", 00:29:00.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.378 "hdgst": ${hdgst:-false}, 00:29:00.378 "ddgst": ${ddgst:-false} 00:29:00.378 }, 00:29:00.378 "method": "bdev_nvme_attach_controller" 00:29:00.378 } 00:29:00.378 EOF 00:29:00.378 )") 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
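For the first bdevperf run above, waitforio declared success by sampling the bdev's read counter over the per-process RPC socket until it crossed 100 operations (94 on the first sample, 643 a quarter second later). A minimal stand-alone version of that poll, reusing the RPC, jq filter, retry count, and interval from the trace:

# Poll Nvme0n1's read-op counter over bdevperf's RPC socket, up to 10 samples
# 0.25 s apart, and stop once at least 100 reads have completed.
sock=/var/tmp/bdevperf.sock
for i in $(seq 1 10); do
  reads=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b Nvme0n1 |
          jq -r '.bdevs[0].num_read_ops')
  if [ "${reads:-0}" -ge 100 ]; then
    echo "I/O is flowing: $reads reads observed"
    break
  fi
  sleep 0.25
done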
00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:00.378 08:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:00.378 "params": { 00:29:00.378 "name": "Nvme0", 00:29:00.378 "trtype": "tcp", 00:29:00.378 "traddr": "10.0.0.2", 00:29:00.378 "adrfam": "ipv4", 00:29:00.378 "trsvcid": "4420", 00:29:00.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.378 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:00.378 "hdgst": false, 00:29:00.378 "ddgst": false 00:29:00.378 }, 00:29:00.378 "method": "bdev_nvme_attach_controller" 00:29:00.378 }' 00:29:00.378 [2024-11-27 08:11:54.397116] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:29:00.378 [2024-11-27 08:11:54.397168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631783 ] 00:29:00.378 [2024-11-27 08:11:54.461068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.636 [2024-11-27 08:11:54.501865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.894 Running I/O for 1 seconds... 00:29:01.826 1920.00 IOPS, 120.00 MiB/s 00:29:01.826 Latency(us) 00:29:01.826 [2024-11-27T07:11:55.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.826 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.826 Verification LBA range: start 0x0 length 0x400 00:29:01.826 Nvme0n1 : 1.01 1955.49 122.22 0.00 0.00 32219.43 7066.49 27582.11 00:29:01.826 [2024-11-27T07:11:55.935Z] =================================================================================================================== 00:29:01.826 [2024-11-27T07:11:55.935Z] Total : 1955.49 122.22 0.00 0.00 32219.43 7066.49 27582.11 00:29:02.083 08:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:02.083 08:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.083 rmmod nvme_tcp 00:29:02.083 rmmod nvme_fabrics 00:29:02.083 rmmod nvme_keyring 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2631275 ']' 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2631275 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2631275 ']' 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2631275 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631275 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631275' 00:29:02.083 killing process with pid 2631275 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2631275 00:29:02.083 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2631275 00:29:02.341 [2024-11-27 08:11:56.276655] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.341 08:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.870 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.870 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:04.870 00:29:04.870 real 0m11.375s 00:29:04.870 user 0m17.600s 00:29:04.870 sys 0m5.628s 00:29:04.870 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:04.871 ************************************ 00:29:04.871 END TEST nvmf_host_management 00:29:04.871 ************************************ 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:04.871 ************************************ 00:29:04.871 START TEST nvmf_lvol 00:29:04.871 ************************************ 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:04.871 * Looking for test storage... 
00:29:04.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.871 --rc genhtml_branch_coverage=1 00:29:04.871 --rc genhtml_function_coverage=1 00:29:04.871 --rc genhtml_legend=1 00:29:04.871 --rc geninfo_all_blocks=1 00:29:04.871 --rc geninfo_unexecuted_blocks=1 00:29:04.871 00:29:04.871 ' 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.871 --rc genhtml_branch_coverage=1 00:29:04.871 --rc genhtml_function_coverage=1 00:29:04.871 --rc genhtml_legend=1 00:29:04.871 --rc geninfo_all_blocks=1 00:29:04.871 --rc geninfo_unexecuted_blocks=1 00:29:04.871 00:29:04.871 ' 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.871 --rc genhtml_branch_coverage=1 00:29:04.871 --rc genhtml_function_coverage=1 00:29:04.871 --rc genhtml_legend=1 00:29:04.871 --rc geninfo_all_blocks=1 00:29:04.871 --rc geninfo_unexecuted_blocks=1 00:29:04.871 00:29:04.871 ' 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.871 --rc genhtml_branch_coverage=1 00:29:04.871 --rc genhtml_function_coverage=1 00:29:04.871 --rc genhtml_legend=1 00:29:04.871 --rc geninfo_all_blocks=1 00:29:04.871 --rc geninfo_unexecuted_blocks=1 00:29:04.871 00:29:04.871 ' 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.871 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.872 08:11:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.872 08:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.130 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.131 08:12:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:10.131 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:10.131 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:10.131 Found net devices under 0000:86:00.0: cvl_0_0 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:10.131 Found net devices under 0000:86:00.1: cvl_0_1 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.131 
08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.131 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:29:10.131 00:29:10.131 --- 10.0.0.2 ping statistics --- 00:29:10.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.131 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:29:10.132 00:29:10.132 --- 10.0.0.1 ping statistics --- 00:29:10.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.132 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2635443 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2635443 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2635443 ']' 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:10.132 [2024-11-27 08:12:03.774546] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:10.132 [2024-11-27 08:12:03.775482] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:29:10.132 [2024-11-27 08:12:03.775515] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.132 [2024-11-27 08:12:03.841642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:10.132 [2024-11-27 08:12:03.883961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.132 [2024-11-27 08:12:03.883998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.132 [2024-11-27 08:12:03.884005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.132 [2024-11-27 08:12:03.884011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.132 [2024-11-27 08:12:03.884016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.132 [2024-11-27 08:12:03.885324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.132 [2024-11-27 08:12:03.885423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.132 [2024-11-27 08:12:03.885427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.132 [2024-11-27 08:12:03.954889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:10.132 [2024-11-27 08:12:03.954931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:10.132 [2024-11-27 08:12:03.955024] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:10.132 [2024-11-27 08:12:03.955126] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
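Pulled together from the trace above, the target-side bring-up is: move one e810 port into a private namespace, address both ends, open TCP/4420 on the initiator-facing port, start nvmf_tgt in interrupt mode on a three-core mask, and create the TCP transport. A sketch only, assuming the same interface names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addressing this run used:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# -m 0x7 gives reactors on cores 0-2; --interrupt-mode is the variant this test exercises.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
# The harness (waitforlisten) blocks until /var/tmp/spdk.sock is up before issuing RPCs, then:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192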
00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.132 08:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:10.132 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.132 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:10.132 [2024-11-27 08:12:04.193981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.132 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:10.389 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:10.389 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:10.646 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:10.646 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:10.904 08:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:11.162 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=03b41f06-178c-4152-8a28-ae4e28ecf49a 00:29:11.162 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 03b41f06-178c-4152-8a28-ae4e28ecf49a lvol 20 00:29:11.162 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1d864fe7-9863-4fe5-a558-1a78c3afcf60 00:29:11.162 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:11.419 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d864fe7-9863-4fe5-a558-1a78c3afcf60 00:29:11.676 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:11.932 [2024-11-27 08:12:05.794038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:29:11.932 08:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:11.932 08:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2635927 00:29:11.932 08:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:11.932 08:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:13.301 08:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1d864fe7-9863-4fe5-a558-1a78c3afcf60 MY_SNAPSHOT 00:29:13.301 08:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f640c49f-280d-4eb9-bb08-650eb778bfcd 00:29:13.301 08:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1d864fe7-9863-4fe5-a558-1a78c3afcf60 30 00:29:13.558 08:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f640c49f-280d-4eb9-bb08-650eb778bfcd MY_CLONE 00:29:13.815 08:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=63276949-a125-4d0a-9f57-b0220bb0b9f8 00:29:13.815 08:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 63276949-a125-4d0a-9f57-b0220bb0b9f8 00:29:14.379 08:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2635927 00:29:22.476 Initializing NVMe Controllers 00:29:22.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:22.476 Controller IO queue size 128, less than required. 00:29:22.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:22.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:22.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:22.476 Initialization complete. Launching workers. 
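The namespace this workload targets was provisioned by the RPC sequence traced between 08:12:04 and 08:12:07, condensed here as a sketch; the variable names are illustrative, and each command substitution mirrors the UUID the corresponding call returned in this run (lvstore 03b41f06-..., lvol 1d864fe7-..., and so on):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512        # Malloc0
$rpc bdev_malloc_create 64 512        # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf writes to the exported lvol, the snapshot/grow path is exercised:
snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"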
00:29:22.476 ======================================================== 00:29:22.476 Latency(us) 00:29:22.476 Device Information : IOPS MiB/s Average min max 00:29:22.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12029.70 46.99 10643.49 1579.49 63898.62 00:29:22.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12161.30 47.51 10528.62 522.51 63489.67 00:29:22.476 ======================================================== 00:29:22.476 Total : 24191.00 94.50 10585.74 522.51 63898.62 00:29:22.476 00:29:22.476 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:22.734 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d864fe7-9863-4fe5-a558-1a78c3afcf60 00:29:22.734 08:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03b41f06-178c-4152-8a28-ae4e28ecf49a 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:22.991 rmmod nvme_tcp 00:29:22.991 rmmod nvme_fabrics 00:29:22.991 rmmod nvme_keyring 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2635443 ']' 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2635443 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2635443 ']' 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2635443 00:29:22.991 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:23.249 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.249 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2635443 00:29:23.249 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:23.249 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:23.249 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2635443' 00:29:23.249 killing process with pid 2635443 00:29:23.249 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2635443 00:29:23.249 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2635443 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.506 08:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.405 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:25.405 00:29:25.405 real 0m20.998s 00:29:25.405 user 0m55.400s 00:29:25.405 sys 0m9.262s 00:29:25.405 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.405 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:25.405 ************************************ 00:29:25.405 END TEST nvmf_lvol 00:29:25.405 ************************************ 00:29:25.405 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:25.405 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:25.405 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.405 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:25.405 ************************************ 00:29:25.405 START TEST nvmf_lvs_grow 00:29:25.405 
************************************ 00:29:25.405 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:25.664 * Looking for test storage... 00:29:25.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.664 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:25.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.665 --rc genhtml_branch_coverage=1 00:29:25.665 --rc genhtml_function_coverage=1 00:29:25.665 --rc genhtml_legend=1 00:29:25.665 --rc geninfo_all_blocks=1 00:29:25.665 --rc geninfo_unexecuted_blocks=1 00:29:25.665 00:29:25.665 ' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:25.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.665 --rc genhtml_branch_coverage=1 00:29:25.665 --rc genhtml_function_coverage=1 00:29:25.665 --rc genhtml_legend=1 00:29:25.665 --rc geninfo_all_blocks=1 00:29:25.665 --rc geninfo_unexecuted_blocks=1 00:29:25.665 00:29:25.665 ' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:25.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.665 --rc genhtml_branch_coverage=1 00:29:25.665 --rc genhtml_function_coverage=1 00:29:25.665 --rc genhtml_legend=1 00:29:25.665 --rc geninfo_all_blocks=1 00:29:25.665 --rc geninfo_unexecuted_blocks=1 00:29:25.665 00:29:25.665 ' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:25.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.665 --rc genhtml_branch_coverage=1 00:29:25.665 --rc genhtml_function_coverage=1 00:29:25.665 --rc genhtml_legend=1 00:29:25.665 --rc geninfo_all_blocks=1 00:29:25.665 --rc geninfo_unexecuted_blocks=1 00:29:25.665 00:29:25.665 ' 00:29:25.665 08:12:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
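A condensed bash sketch of the NVMF_APP assembly traced above (binary path, shm id and tracepoint mask copied from this run; the rest is an illustrative assumption, not the harness script itself):

    # Rebuild the target command line the way build_nvmf_app_args does in the trace above.
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)   # shared-memory id 0, full tracepoint mask
    NVMF_APP+=(--interrupt-mode)                       # the suite was invoked with --interrupt-mode
    # Further down the trace the same array is launched inside the target namespace:
    #   ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x1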
00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.665 08:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.925 08:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.925 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
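The array setup above, together with the loop output that follows, is how the harness maps supported NIC PCI IDs to kernel net devices. A minimal standalone sketch of the same idea, assuming the two e810 ports reported below and relying only on the sysfs layout used in the trace:

    # For each candidate PCI function, list the net devices the kernel exposes under it.
    pci_devs=(0000:86:00.0 0000:86:00.1)                    # e810 ports found in this run (0x8086:0x159b)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
        net_devs+=("${pci_net_devs[@]##*/}")                # keep only the interface names
    done
    printf 'usable interface: %s\n' "${net_devs[@]}"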
00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:30.926 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:30.926 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:30.926 Found net devices under 0000:86:00.0: cvl_0_0 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:30.926 Found net devices under 0000:86:00.1: cvl_0_1 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.926 08:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:30.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:29:30.926 00:29:30.926 --- 10.0.0.2 ping statistics --- 00:29:30.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.926 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:29:30.926 00:29:30.926 --- 10.0.0.1 ping statistics --- 00:29:30.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.926 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2641449 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2641449 00:29:30.926 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2641449 ']' 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.927 [2024-11-27 08:12:24.772439] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
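The nvmf_tcp_init lines above are the whole point-to-point topology for this run: one e810 port is moved into a network namespace and becomes the target side, the other stays in the root namespace as the initiator. Collected into a plain sequence (interface names, addresses and port taken from the trace above; requires root on the same host layout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                  # root ns -> target, as checked above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator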
00:29:30.927 [2024-11-27 08:12:24.773356] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:29:30.927 [2024-11-27 08:12:24.773390] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.927 [2024-11-27 08:12:24.838819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.927 [2024-11-27 08:12:24.880828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.927 [2024-11-27 08:12:24.880860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.927 [2024-11-27 08:12:24.880867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.927 [2024-11-27 08:12:24.880873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.927 [2024-11-27 08:12:24.880878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.927 [2024-11-27 08:12:24.881399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.927 [2024-11-27 08:12:24.949362] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:30.927 [2024-11-27 08:12:24.949575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.927 08:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.927 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.927 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:31.184 [2024-11-27 08:12:25.177834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:31.184 ************************************ 00:29:31.184 START TEST lvs_grow_clean 00:29:31.184 ************************************ 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:31.184 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:31.441 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:31.441 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:31.698 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:31.698 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:31.698 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:31.955 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:31.955 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:31.956 08:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0060746d-cc62-493a-9b78-fc57d8a9042e lvol 150 00:29:31.956 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1ba6864a-311a-4026-845a-354cfa4d750f 00:29:31.956 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:31.956 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:32.213 [2024-11-27 08:12:26.213779] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:32.213 [2024-11-27 08:12:26.213847] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:32.213 true 00:29:32.213 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:32.213 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:32.471 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:32.471 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:32.728 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1ba6864a-311a-4026-845a-354cfa4d750f 00:29:32.728 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:32.986 [2024-11-27 08:12:26.986015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.986 08:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:33.244 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2641940 00:29:33.244 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.244 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2641940 /var/tmp/bdevperf.sock 00:29:33.244 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2641940 ']' 00:29:33.244 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:33.244 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.244 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:33.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:33.244 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.244 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.244 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:33.244 [2024-11-27 08:12:27.237261] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:29:33.244 [2024-11-27 08:12:27.237311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641940 ] 00:29:33.244 [2024-11-27 08:12:27.298691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.244 [2024-11-27 08:12:27.341787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.502 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.502 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:33.502 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:33.761 Nvme0n1 00:29:33.761 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:34.019 [ 00:29:34.020 { 00:29:34.020 "name": "Nvme0n1", 00:29:34.020 "aliases": [ 00:29:34.020 "1ba6864a-311a-4026-845a-354cfa4d750f" 00:29:34.020 ], 00:29:34.020 "product_name": "NVMe disk", 00:29:34.020 "block_size": 4096, 00:29:34.020 "num_blocks": 38912, 00:29:34.020 "uuid": "1ba6864a-311a-4026-845a-354cfa4d750f", 00:29:34.020 "numa_id": 1, 00:29:34.020 "assigned_rate_limits": { 00:29:34.020 "rw_ios_per_sec": 0, 00:29:34.020 "rw_mbytes_per_sec": 0, 00:29:34.020 "r_mbytes_per_sec": 0, 00:29:34.020 "w_mbytes_per_sec": 0 00:29:34.020 }, 00:29:34.020 "claimed": false, 00:29:34.020 "zoned": false, 00:29:34.020 "supported_io_types": { 00:29:34.020 "read": true, 00:29:34.020 "write": true, 00:29:34.020 "unmap": true, 00:29:34.020 "flush": true, 00:29:34.020 "reset": true, 00:29:34.020 "nvme_admin": true, 00:29:34.020 "nvme_io": true, 00:29:34.020 "nvme_io_md": false, 00:29:34.020 "write_zeroes": true, 00:29:34.020 "zcopy": false, 00:29:34.020 "get_zone_info": false, 00:29:34.020 "zone_management": false, 00:29:34.020 "zone_append": false, 00:29:34.020 "compare": true, 00:29:34.020 "compare_and_write": true, 00:29:34.020 "abort": true, 00:29:34.020 "seek_hole": false, 00:29:34.020 "seek_data": false, 00:29:34.020 "copy": true, 
00:29:34.020 "nvme_iov_md": false 00:29:34.020 }, 00:29:34.020 "memory_domains": [ 00:29:34.020 { 00:29:34.020 "dma_device_id": "system", 00:29:34.020 "dma_device_type": 1 00:29:34.020 } 00:29:34.020 ], 00:29:34.020 "driver_specific": { 00:29:34.020 "nvme": [ 00:29:34.020 { 00:29:34.020 "trid": { 00:29:34.020 "trtype": "TCP", 00:29:34.020 "adrfam": "IPv4", 00:29:34.020 "traddr": "10.0.0.2", 00:29:34.020 "trsvcid": "4420", 00:29:34.020 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:34.020 }, 00:29:34.020 "ctrlr_data": { 00:29:34.020 "cntlid": 1, 00:29:34.020 "vendor_id": "0x8086", 00:29:34.020 "model_number": "SPDK bdev Controller", 00:29:34.020 "serial_number": "SPDK0", 00:29:34.020 "firmware_revision": "25.01", 00:29:34.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:34.020 "oacs": { 00:29:34.020 "security": 0, 00:29:34.020 "format": 0, 00:29:34.020 "firmware": 0, 00:29:34.020 "ns_manage": 0 00:29:34.020 }, 00:29:34.020 "multi_ctrlr": true, 00:29:34.020 "ana_reporting": false 00:29:34.020 }, 00:29:34.020 "vs": { 00:29:34.020 "nvme_version": "1.3" 00:29:34.020 }, 00:29:34.020 "ns_data": { 00:29:34.020 "id": 1, 00:29:34.020 "can_share": true 00:29:34.020 } 00:29:34.020 } 00:29:34.020 ], 00:29:34.020 "mp_policy": "active_passive" 00:29:34.020 } 00:29:34.020 } 00:29:34.020 ] 00:29:34.020 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2641955 00:29:34.020 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:34.020 08:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:34.020 Running I/O for 10 seconds... 
00:29:34.958 Latency(us) 00:29:34.958 [2024-11-27T07:12:29.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.958 Nvme0n1 : 1.00 21844.00 85.33 0.00 0.00 0.00 0.00 0.00 00:29:34.958 [2024-11-27T07:12:29.067Z] =================================================================================================================== 00:29:34.958 [2024-11-27T07:12:29.067Z] Total : 21844.00 85.33 0.00 0.00 0.00 0.00 0.00 00:29:34.958 00:29:35.904 08:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:36.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.162 Nvme0n1 : 2.00 22034.50 86.07 0.00 0.00 0.00 0.00 0.00 00:29:36.162 [2024-11-27T07:12:30.271Z] =================================================================================================================== 00:29:36.162 [2024-11-27T07:12:30.272Z] Total : 22034.50 86.07 0.00 0.00 0.00 0.00 0.00 00:29:36.163 00:29:36.163 true 00:29:36.163 08:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:36.163 08:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:36.421 08:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:36.421 08:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:36.421 08:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2641955 00:29:36.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.989 Nvme0n1 : 3.00 21992.33 85.91 0.00 0.00 0.00 0.00 0.00 00:29:36.989 [2024-11-27T07:12:31.098Z] =================================================================================================================== 00:29:36.989 [2024-11-27T07:12:31.098Z] Total : 21992.33 85.91 0.00 0.00 0.00 0.00 0.00 00:29:36.989 00:29:38.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.366 Nvme0n1 : 4.00 22082.25 86.26 0.00 0.00 0.00 0.00 0.00 00:29:38.366 [2024-11-27T07:12:32.475Z] =================================================================================================================== 00:29:38.366 [2024-11-27T07:12:32.475Z] Total : 22082.25 86.26 0.00 0.00 0.00 0.00 0.00 00:29:38.366 00:29:38.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.934 Nvme0n1 : 5.00 22136.20 86.47 0.00 0.00 0.00 0.00 0.00 00:29:38.934 [2024-11-27T07:12:33.043Z] =================================================================================================================== 00:29:38.934 [2024-11-27T07:12:33.043Z] Total : 22136.20 86.47 0.00 0.00 0.00 0.00 0.00 00:29:38.934 00:29:40.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.314 Nvme0n1 : 6.00 22182.83 86.65 0.00 0.00 0.00 0.00 0.00 00:29:40.314 [2024-11-27T07:12:34.423Z] 
=================================================================================================================== 00:29:40.314 [2024-11-27T07:12:34.423Z] Total : 22182.83 86.65 0.00 0.00 0.00 0.00 0.00 00:29:40.314 00:29:41.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.253 Nvme0n1 : 7.00 22225.14 86.82 0.00 0.00 0.00 0.00 0.00 00:29:41.253 [2024-11-27T07:12:35.362Z] =================================================================================================================== 00:29:41.253 [2024-11-27T07:12:35.362Z] Total : 22225.14 86.82 0.00 0.00 0.00 0.00 0.00 00:29:41.253 00:29:42.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.191 Nvme0n1 : 8.00 22241.00 86.88 0.00 0.00 0.00 0.00 0.00 00:29:42.191 [2024-11-27T07:12:36.300Z] =================================================================================================================== 00:29:42.191 [2024-11-27T07:12:36.300Z] Total : 22241.00 86.88 0.00 0.00 0.00 0.00 0.00 00:29:42.191 00:29:43.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.131 Nvme0n1 : 9.00 22267.44 86.98 0.00 0.00 0.00 0.00 0.00 00:29:43.131 [2024-11-27T07:12:37.240Z] =================================================================================================================== 00:29:43.131 [2024-11-27T07:12:37.240Z] Total : 22267.44 86.98 0.00 0.00 0.00 0.00 0.00 00:29:43.131 00:29:44.071 00:29:44.071 Latency(us) 00:29:44.071 [2024-11-27T07:12:38.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.071 Nvme0n1 : 10.00 22285.05 87.05 0.00 0.00 5740.68 3590.23 15386.71 00:29:44.071 [2024-11-27T07:12:38.180Z] =================================================================================================================== 00:29:44.071 [2024-11-27T07:12:38.180Z] Total : 22285.05 87.05 0.00 0.00 5740.68 3590.23 15386.71 00:29:44.071 { 00:29:44.071 "results": [ 00:29:44.071 { 00:29:44.071 "job": "Nvme0n1", 00:29:44.071 "core_mask": "0x2", 00:29:44.071 "workload": "randwrite", 00:29:44.071 "status": "finished", 00:29:44.071 "queue_depth": 128, 00:29:44.071 "io_size": 4096, 00:29:44.071 "runtime": 10.001638, 00:29:44.071 "iops": 22285.04970885769, 00:29:44.071 "mibps": 87.05097542522535, 00:29:44.071 "io_failed": 0, 00:29:44.071 "io_timeout": 0, 00:29:44.071 "avg_latency_us": 5740.677623744221, 00:29:44.071 "min_latency_us": 3590.2330434782607, 00:29:44.071 "max_latency_us": 15386.713043478261 00:29:44.071 } 00:29:44.071 ], 00:29:44.071 "core_count": 1 00:29:44.071 } 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2641940 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2641940 ']' 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2641940 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2641940 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2641940' 00:29:44.071 killing process with pid 2641940 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2641940 00:29:44.071 Received shutdown signal, test time was about 10.000000 seconds 00:29:44.071 00:29:44.071 Latency(us) 00:29:44.071 [2024-11-27T07:12:38.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.071 [2024-11-27T07:12:38.180Z] =================================================================================================================== 00:29:44.071 [2024-11-27T07:12:38.180Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.071 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2641940 00:29:44.331 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:44.591 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:44.591 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:44.591 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:44.851 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:44.851 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:44.851 08:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:45.110 [2024-11-27 08:12:39.049857] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:45.110 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:45.110 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:45.110 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:45.110 
08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.110 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.111 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.111 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.111 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.111 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.111 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.111 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:45.111 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:45.369 request: 00:29:45.369 { 00:29:45.369 "uuid": "0060746d-cc62-493a-9b78-fc57d8a9042e", 00:29:45.369 "method": "bdev_lvol_get_lvstores", 00:29:45.369 "req_id": 1 00:29:45.369 } 00:29:45.369 Got JSON-RPC error response 00:29:45.369 response: 00:29:45.369 { 00:29:45.369 "code": -19, 00:29:45.369 "message": "No such device" 00:29:45.369 } 00:29:45.369 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:45.369 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:45.369 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:45.369 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:45.369 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:45.629 aio_bdev 00:29:45.629 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1ba6864a-311a-4026-845a-354cfa4d750f 00:29:45.629 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1ba6864a-311a-4026-845a-354cfa4d750f 00:29:45.629 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:45.629 08:12:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:45.629 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:45.629 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:45.629 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:45.629 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1ba6864a-311a-4026-845a-354cfa4d750f -t 2000 00:29:45.889 [ 00:29:45.889 { 00:29:45.889 "name": "1ba6864a-311a-4026-845a-354cfa4d750f", 00:29:45.889 "aliases": [ 00:29:45.889 "lvs/lvol" 00:29:45.889 ], 00:29:45.889 "product_name": "Logical Volume", 00:29:45.889 "block_size": 4096, 00:29:45.889 "num_blocks": 38912, 00:29:45.889 "uuid": "1ba6864a-311a-4026-845a-354cfa4d750f", 00:29:45.889 "assigned_rate_limits": { 00:29:45.889 "rw_ios_per_sec": 0, 00:29:45.889 "rw_mbytes_per_sec": 0, 00:29:45.889 "r_mbytes_per_sec": 0, 00:29:45.889 "w_mbytes_per_sec": 0 00:29:45.889 }, 00:29:45.889 "claimed": false, 00:29:45.889 "zoned": false, 00:29:45.889 "supported_io_types": { 00:29:45.889 "read": true, 00:29:45.889 "write": true, 00:29:45.889 "unmap": true, 00:29:45.889 "flush": false, 00:29:45.889 "reset": true, 00:29:45.889 "nvme_admin": false, 00:29:45.889 "nvme_io": false, 00:29:45.889 "nvme_io_md": false, 00:29:45.889 "write_zeroes": true, 00:29:45.889 "zcopy": false, 00:29:45.889 "get_zone_info": false, 00:29:45.889 "zone_management": false, 00:29:45.889 "zone_append": false, 00:29:45.889 "compare": false, 00:29:45.889 "compare_and_write": false, 00:29:45.889 "abort": false, 00:29:45.889 "seek_hole": true, 00:29:45.889 "seek_data": true, 00:29:45.889 "copy": false, 00:29:45.889 "nvme_iov_md": false 00:29:45.889 }, 00:29:45.889 "driver_specific": { 00:29:45.889 "lvol": { 00:29:45.889 "lvol_store_uuid": "0060746d-cc62-493a-9b78-fc57d8a9042e", 00:29:45.889 "base_bdev": "aio_bdev", 00:29:45.889 "thin_provision": false, 00:29:45.889 "num_allocated_clusters": 38, 00:29:45.889 "snapshot": false, 00:29:45.889 "clone": false, 00:29:45.889 "esnap_clone": false 00:29:45.889 } 00:29:45.889 } 00:29:45.889 } 00:29:45.889 ] 00:29:45.889 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:45.889 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:45.889 08:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:46.148 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:46.148 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:46.148 08:12:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:46.408 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:46.408 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1ba6864a-311a-4026-845a-354cfa4d750f 00:29:46.408 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0060746d-cc62-493a-9b78-fc57d8a9042e 00:29:46.668 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:46.927 00:29:46.927 real 0m15.689s 00:29:46.927 user 0m15.255s 00:29:46.927 sys 0m1.404s 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:46.927 ************************************ 00:29:46.927 END TEST lvs_grow_clean 00:29:46.927 ************************************ 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:46.927 ************************************ 00:29:46.927 START TEST lvs_grow_dirty 00:29:46.927 ************************************ 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:46.927 08:12:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:46.927 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:46.928 08:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:47.187 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:47.187 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:47.446 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=932fe162-cb98-4667-a27e-084d900d3409 00:29:47.446 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:29:47.446 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:47.705 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:47.705 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:47.705 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 932fe162-cb98-4667-a27e-084d900d3409 lvol 150 00:29:47.963 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1a37128e-a271-490e-b57d-067d819c0c50 00:29:47.964 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:47.964 08:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:47.964 [2024-11-27 08:12:41.989780] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:47.964 [2024-11-27 08:12:41.989916] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:47.964 true 00:29:47.964 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:47.964 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:29:48.222 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:48.222 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:48.481 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a37128e-a271-490e-b57d-067d819c0c50 00:29:48.481 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.739 [2024-11-27 08:12:42.758079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.739 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:48.998 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2644518 00:29:48.998 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.998 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2644518 /var/tmp/bdevperf.sock 00:29:48.999 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:48.999 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2644518 ']' 00:29:48.999 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:48.999 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.999 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:48.999 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.999 08:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 [2024-11-27 08:12:42.995266] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:29:48.999 [2024-11-27 08:12:42.995317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2644518 ] 00:29:48.999 [2024-11-27 08:12:43.057008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.999 [2024-11-27 08:12:43.100177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.258 08:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.258 08:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:49.258 08:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:49.516 Nvme0n1 00:29:49.516 08:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:49.774 [ 00:29:49.774 { 00:29:49.774 "name": "Nvme0n1", 00:29:49.774 "aliases": [ 00:29:49.774 "1a37128e-a271-490e-b57d-067d819c0c50" 00:29:49.774 ], 00:29:49.774 "product_name": "NVMe disk", 00:29:49.774 "block_size": 4096, 00:29:49.774 "num_blocks": 38912, 00:29:49.774 "uuid": "1a37128e-a271-490e-b57d-067d819c0c50", 00:29:49.774 "numa_id": 1, 00:29:49.774 "assigned_rate_limits": { 00:29:49.774 "rw_ios_per_sec": 0, 00:29:49.774 "rw_mbytes_per_sec": 0, 00:29:49.774 "r_mbytes_per_sec": 0, 00:29:49.774 "w_mbytes_per_sec": 0 00:29:49.774 }, 00:29:49.774 "claimed": false, 00:29:49.774 "zoned": false, 00:29:49.774 "supported_io_types": { 00:29:49.774 "read": true, 00:29:49.774 "write": true, 00:29:49.774 "unmap": true, 00:29:49.775 "flush": true, 00:29:49.775 "reset": true, 00:29:49.775 "nvme_admin": true, 00:29:49.775 "nvme_io": true, 00:29:49.775 "nvme_io_md": false, 00:29:49.775 "write_zeroes": true, 00:29:49.775 "zcopy": false, 00:29:49.775 "get_zone_info": false, 00:29:49.775 "zone_management": false, 00:29:49.775 "zone_append": false, 00:29:49.775 "compare": true, 00:29:49.775 "compare_and_write": true, 00:29:49.775 "abort": true, 00:29:49.775 "seek_hole": false, 00:29:49.775 "seek_data": false, 00:29:49.775 "copy": true, 00:29:49.775 "nvme_iov_md": false 00:29:49.775 }, 00:29:49.775 "memory_domains": [ 00:29:49.775 { 00:29:49.775 "dma_device_id": "system", 00:29:49.775 "dma_device_type": 1 00:29:49.775 } 00:29:49.775 ], 00:29:49.775 "driver_specific": { 00:29:49.775 "nvme": [ 00:29:49.775 { 00:29:49.775 "trid": { 00:29:49.775 "trtype": "TCP", 00:29:49.775 "adrfam": "IPv4", 00:29:49.775 "traddr": "10.0.0.2", 00:29:49.775 "trsvcid": "4420", 00:29:49.775 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:49.775 }, 00:29:49.775 "ctrlr_data": { 00:29:49.775 "cntlid": 1, 00:29:49.775 "vendor_id": "0x8086", 00:29:49.775 "model_number": "SPDK bdev Controller", 00:29:49.775 "serial_number": "SPDK0", 00:29:49.775 "firmware_revision": "25.01", 00:29:49.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:49.775 "oacs": { 00:29:49.775 "security": 0, 00:29:49.775 "format": 0, 00:29:49.775 "firmware": 0, 00:29:49.775 "ns_manage": 0 00:29:49.775 }, 
00:29:49.775 "multi_ctrlr": true, 00:29:49.775 "ana_reporting": false 00:29:49.775 }, 00:29:49.775 "vs": { 00:29:49.775 "nvme_version": "1.3" 00:29:49.775 }, 00:29:49.775 "ns_data": { 00:29:49.775 "id": 1, 00:29:49.775 "can_share": true 00:29:49.775 } 00:29:49.775 } 00:29:49.775 ], 00:29:49.775 "mp_policy": "active_passive" 00:29:49.775 } 00:29:49.775 } 00:29:49.775 ] 00:29:49.775 08:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2644535 00:29:49.775 08:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:49.775 08:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:49.775 Running I/O for 10 seconds... 00:29:51.152 Latency(us) 00:29:51.152 [2024-11-27T07:12:45.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.152 Nvme0n1 : 1.00 21861.00 85.39 0.00 0.00 0.00 0.00 0.00 00:29:51.152 [2024-11-27T07:12:45.261Z] =================================================================================================================== 00:29:51.152 [2024-11-27T07:12:45.261Z] Total : 21861.00 85.39 0.00 0.00 0.00 0.00 0.00 00:29:51.152 00:29:51.852 08:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 932fe162-cb98-4667-a27e-084d900d3409 00:29:51.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.852 Nvme0n1 : 2.00 22043.00 86.11 0.00 0.00 0.00 0.00 0.00 00:29:51.852 [2024-11-27T07:12:45.961Z] =================================================================================================================== 00:29:51.852 [2024-11-27T07:12:45.961Z] Total : 22043.00 86.11 0.00 0.00 0.00 0.00 0.00 00:29:51.852 00:29:52.141 true 00:29:52.141 08:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:29:52.141 08:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:52.141 08:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:52.141 08:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:52.141 08:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2644535 00:29:53.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.115 Nvme0n1 : 3.00 22061.33 86.18 0.00 0.00 0.00 0.00 0.00 00:29:53.115 [2024-11-27T07:12:47.225Z] =================================================================================================================== 00:29:53.116 [2024-11-27T07:12:47.225Z] Total : 22061.33 86.18 0.00 0.00 0.00 0.00 0.00 00:29:53.116 00:29:54.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:54.050 Nvme0n1 : 4.00 22134.00 86.46 0.00 0.00 0.00 0.00 0.00 00:29:54.050 [2024-11-27T07:12:48.159Z] =================================================================================================================== 00:29:54.050 [2024-11-27T07:12:48.159Z] Total : 22134.00 86.46 0.00 0.00 0.00 0.00 0.00 00:29:54.050 00:29:54.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:54.984 Nvme0n1 : 5.00 22177.60 86.63 0.00 0.00 0.00 0.00 0.00 00:29:54.984 [2024-11-27T07:12:49.093Z] =================================================================================================================== 00:29:54.984 [2024-11-27T07:12:49.094Z] Total : 22177.60 86.63 0.00 0.00 0.00 0.00 0.00 00:29:54.985 00:29:55.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:55.918 Nvme0n1 : 6.00 22227.83 86.83 0.00 0.00 0.00 0.00 0.00 00:29:55.918 [2024-11-27T07:12:50.027Z] =================================================================================================================== 00:29:55.918 [2024-11-27T07:12:50.027Z] Total : 22227.83 86.83 0.00 0.00 0.00 0.00 0.00 00:29:55.918 00:29:56.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.853 Nvme0n1 : 7.00 22193.57 86.69 0.00 0.00 0.00 0.00 0.00 00:29:56.853 [2024-11-27T07:12:50.962Z] =================================================================================================================== 00:29:56.853 [2024-11-27T07:12:50.962Z] Total : 22193.57 86.69 0.00 0.00 0.00 0.00 0.00 00:29:56.853 00:29:57.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.788 Nvme0n1 : 8.00 22213.38 86.77 0.00 0.00 0.00 0.00 0.00 00:29:57.788 [2024-11-27T07:12:51.897Z] =================================================================================================================== 00:29:57.788 [2024-11-27T07:12:51.897Z] Total : 22213.38 86.77 0.00 0.00 0.00 0.00 0.00 00:29:57.788 00:29:59.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.163 Nvme0n1 : 9.00 22242.89 86.89 0.00 0.00 0.00 0.00 0.00 00:29:59.163 [2024-11-27T07:12:53.272Z] =================================================================================================================== 00:29:59.163 [2024-11-27T07:12:53.272Z] Total : 22242.89 86.89 0.00 0.00 0.00 0.00 0.00 00:29:59.163 00:30:00.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:00.096 Nvme0n1 : 10.00 22266.50 86.98 0.00 0.00 0.00 0.00 0.00 00:30:00.096 [2024-11-27T07:12:54.205Z] =================================================================================================================== 00:30:00.096 [2024-11-27T07:12:54.205Z] Total : 22266.50 86.98 0.00 0.00 0.00 0.00 0.00 00:30:00.096 00:30:00.096 00:30:00.096 Latency(us) 00:30:00.096 [2024-11-27T07:12:54.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:00.096 Nvme0n1 : 10.00 22269.05 86.99 0.00 0.00 5744.69 3319.54 15614.66 00:30:00.096 [2024-11-27T07:12:54.205Z] =================================================================================================================== 00:30:00.096 [2024-11-27T07:12:54.205Z] Total : 22269.05 86.99 0.00 0.00 5744.69 3319.54 15614.66 00:30:00.096 { 00:30:00.096 "results": [ 00:30:00.096 { 00:30:00.096 "job": "Nvme0n1", 00:30:00.096 "core_mask": "0x2", 00:30:00.096 "workload": "randwrite", 
00:30:00.096 "status": "finished", 00:30:00.096 "queue_depth": 128, 00:30:00.096 "io_size": 4096, 00:30:00.096 "runtime": 10.004602, 00:30:00.096 "iops": 22269.051782369752, 00:30:00.096 "mibps": 86.98848352488184, 00:30:00.096 "io_failed": 0, 00:30:00.096 "io_timeout": 0, 00:30:00.096 "avg_latency_us": 5744.687130978863, 00:30:00.096 "min_latency_us": 3319.5408695652172, 00:30:00.096 "max_latency_us": 15614.664347826087 00:30:00.096 } 00:30:00.096 ], 00:30:00.096 "core_count": 1 00:30:00.096 } 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2644518 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2644518 ']' 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2644518 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2644518 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2644518' 00:30:00.096 killing process with pid 2644518 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2644518 00:30:00.096 Received shutdown signal, test time was about 10.000000 seconds 00:30:00.096 00:30:00.096 Latency(us) 00:30:00.096 [2024-11-27T07:12:54.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.096 [2024-11-27T07:12:54.205Z] =================================================================================================================== 00:30:00.096 [2024-11-27T07:12:54.205Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:00.096 08:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2644518 00:30:00.096 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:00.354 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:00.612 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:30:00.613 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq 
-r '.[0].free_clusters' 00:30:00.613 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:00.613 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:00.613 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2641449 00:30:00.613 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2641449 00:30:00.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2641449 Killed "${NVMF_APP[@]}" "$@" 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2646376 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2646376 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2646376 ']' 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:00.871 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:00.871 [2024-11-27 08:12:54.796202] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:00.871 [2024-11-27 08:12:54.797132] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:30:00.871 [2024-11-27 08:12:54.797168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.871 [2024-11-27 08:12:54.862459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.871 [2024-11-27 08:12:54.903432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.871 [2024-11-27 08:12:54.903468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.871 [2024-11-27 08:12:54.903475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.871 [2024-11-27 08:12:54.903482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.871 [2024-11-27 08:12:54.903487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.871 [2024-11-27 08:12:54.904010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.871 [2024-11-27 08:12:54.972992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:00.871 [2024-11-27 08:12:54.973201] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:01.129 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.129 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:01.129 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.129 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.129 08:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:01.129 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.129 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:01.129 [2024-11-27 08:12:55.207093] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:01.129 [2024-11-27 08:12:55.207203] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:01.129 [2024-11-27 08:12:55.207242] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:01.129 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:01.129 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1a37128e-a271-490e-b57d-067d819c0c50 00:30:01.129 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1a37128e-a271-490e-b57d-067d819c0c50 00:30:01.129 08:12:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:01.129 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:01.129 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:01.129 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:01.129 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:01.388 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a37128e-a271-490e-b57d-067d819c0c50 -t 2000 00:30:01.646 [ 00:30:01.646 { 00:30:01.646 "name": "1a37128e-a271-490e-b57d-067d819c0c50", 00:30:01.646 "aliases": [ 00:30:01.646 "lvs/lvol" 00:30:01.646 ], 00:30:01.646 "product_name": "Logical Volume", 00:30:01.647 "block_size": 4096, 00:30:01.647 "num_blocks": 38912, 00:30:01.647 "uuid": "1a37128e-a271-490e-b57d-067d819c0c50", 00:30:01.647 "assigned_rate_limits": { 00:30:01.647 "rw_ios_per_sec": 0, 00:30:01.647 "rw_mbytes_per_sec": 0, 00:30:01.647 "r_mbytes_per_sec": 0, 00:30:01.647 "w_mbytes_per_sec": 0 00:30:01.647 }, 00:30:01.647 "claimed": false, 00:30:01.647 "zoned": false, 00:30:01.647 "supported_io_types": { 00:30:01.647 "read": true, 00:30:01.647 "write": true, 00:30:01.647 "unmap": true, 00:30:01.647 "flush": false, 00:30:01.647 "reset": true, 00:30:01.647 "nvme_admin": false, 00:30:01.647 "nvme_io": false, 00:30:01.647 "nvme_io_md": false, 00:30:01.647 "write_zeroes": true, 00:30:01.647 "zcopy": false, 00:30:01.647 "get_zone_info": false, 00:30:01.647 "zone_management": false, 00:30:01.647 "zone_append": false, 00:30:01.647 "compare": false, 00:30:01.647 "compare_and_write": false, 00:30:01.647 "abort": false, 00:30:01.647 "seek_hole": true, 00:30:01.647 "seek_data": true, 00:30:01.647 "copy": false, 00:30:01.647 "nvme_iov_md": false 00:30:01.647 }, 00:30:01.647 "driver_specific": { 00:30:01.647 "lvol": { 00:30:01.647 "lvol_store_uuid": "932fe162-cb98-4667-a27e-084d900d3409", 00:30:01.647 "base_bdev": "aio_bdev", 00:30:01.647 "thin_provision": false, 00:30:01.647 "num_allocated_clusters": 38, 00:30:01.647 "snapshot": false, 00:30:01.647 "clone": false, 00:30:01.647 "esnap_clone": false 00:30:01.647 } 00:30:01.647 } 00:30:01.647 } 00:30:01.647 ] 00:30:01.647 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:01.647 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:30:01.647 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:01.905 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:01.905 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:30:01.905 08:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:02.163 [2024-11-27 08:12:56.200472] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:02.163 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:30:02.420 request: 00:30:02.420 { 00:30:02.420 "uuid": "932fe162-cb98-4667-a27e-084d900d3409", 00:30:02.420 "method": "bdev_lvol_get_lvstores", 00:30:02.420 "req_id": 1 00:30:02.420 } 00:30:02.420 Got JSON-RPC error response 00:30:02.420 response: 00:30:02.420 { 00:30:02.420 "code": -19, 00:30:02.420 "message": "No such device" 
00:30:02.420 } 00:30:02.420 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:02.420 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:02.420 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:02.420 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:02.420 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:02.678 aio_bdev 00:30:02.678 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1a37128e-a271-490e-b57d-067d819c0c50 00:30:02.678 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1a37128e-a271-490e-b57d-067d819c0c50 00:30:02.678 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:02.678 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:02.678 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:02.678 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:02.678 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:02.935 08:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a37128e-a271-490e-b57d-067d819c0c50 -t 2000 00:30:02.935 [ 00:30:02.935 { 00:30:02.935 "name": "1a37128e-a271-490e-b57d-067d819c0c50", 00:30:02.935 "aliases": [ 00:30:02.935 "lvs/lvol" 00:30:02.935 ], 00:30:02.935 "product_name": "Logical Volume", 00:30:02.935 "block_size": 4096, 00:30:02.935 "num_blocks": 38912, 00:30:02.935 "uuid": "1a37128e-a271-490e-b57d-067d819c0c50", 00:30:02.935 "assigned_rate_limits": { 00:30:02.935 "rw_ios_per_sec": 0, 00:30:02.935 "rw_mbytes_per_sec": 0, 00:30:02.935 "r_mbytes_per_sec": 0, 00:30:02.935 "w_mbytes_per_sec": 0 00:30:02.935 }, 00:30:02.935 "claimed": false, 00:30:02.935 "zoned": false, 00:30:02.935 "supported_io_types": { 00:30:02.935 "read": true, 00:30:02.935 "write": true, 00:30:02.935 "unmap": true, 00:30:02.935 "flush": false, 00:30:02.935 "reset": true, 00:30:02.935 "nvme_admin": false, 00:30:02.935 "nvme_io": false, 00:30:02.935 "nvme_io_md": false, 00:30:02.935 "write_zeroes": true, 00:30:02.935 "zcopy": false, 00:30:02.935 "get_zone_info": false, 00:30:02.935 "zone_management": false, 00:30:02.935 "zone_append": false, 00:30:02.935 "compare": false, 00:30:02.935 "compare_and_write": false, 00:30:02.935 "abort": false, 00:30:02.936 "seek_hole": true, 00:30:02.936 "seek_data": true, 00:30:02.936 "copy": false, 
00:30:02.936 "nvme_iov_md": false 00:30:02.936 }, 00:30:02.936 "driver_specific": { 00:30:02.936 "lvol": { 00:30:02.936 "lvol_store_uuid": "932fe162-cb98-4667-a27e-084d900d3409", 00:30:02.936 "base_bdev": "aio_bdev", 00:30:02.936 "thin_provision": false, 00:30:02.936 "num_allocated_clusters": 38, 00:30:02.936 "snapshot": false, 00:30:02.936 "clone": false, 00:30:02.936 "esnap_clone": false 00:30:02.936 } 00:30:02.936 } 00:30:02.936 } 00:30:02.936 ] 00:30:02.936 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:02.936 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:30:02.936 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:03.193 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:03.193 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 932fe162-cb98-4667-a27e-084d900d3409 00:30:03.193 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:03.451 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:03.451 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1a37128e-a271-490e-b57d-067d819c0c50 00:30:03.709 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 932fe162-cb98-4667-a27e-084d900d3409 00:30:03.709 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:03.966 08:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:03.966 00:30:03.966 real 0m17.027s 00:30:03.966 user 0m34.539s 00:30:03.966 sys 0m3.667s 00:30:03.966 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.966 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:03.966 ************************************ 00:30:03.966 END TEST lvs_grow_dirty 00:30:03.966 ************************************ 00:30:03.966 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:03.966 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:03.966 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:03.966 
08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:03.966 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:03.966 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:03.966 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:03.966 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:03.966 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:03.966 nvmf_trace.0 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.224 rmmod nvme_tcp 00:30:04.224 rmmod nvme_fabrics 00:30:04.224 rmmod nvme_keyring 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2646376 ']' 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2646376 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2646376 ']' 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2646376 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2646376 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2646376' 00:30:04.224 killing process with pid 2646376 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2646376 00:30:04.224 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2646376 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.482 08:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.381 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.381 00:30:06.381 real 0m40.961s 00:30:06.381 user 0m51.861s 00:30:06.381 sys 0m9.255s 00:30:06.381 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.381 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:06.381 ************************************ 00:30:06.381 END TEST nvmf_lvs_grow 00:30:06.381 ************************************ 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:06.644 ************************************ 00:30:06.644 START TEST nvmf_bdev_io_wait 00:30:06.644 ************************************ 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
--interrupt-mode 00:30:06.644 * Looking for test storage... 00:30:06.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.644 --rc genhtml_branch_coverage=1 00:30:06.644 --rc genhtml_function_coverage=1 00:30:06.644 --rc genhtml_legend=1 00:30:06.644 --rc geninfo_all_blocks=1 00:30:06.644 --rc geninfo_unexecuted_blocks=1 00:30:06.644 00:30:06.644 ' 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.644 --rc genhtml_branch_coverage=1 00:30:06.644 --rc genhtml_function_coverage=1 00:30:06.644 --rc genhtml_legend=1 00:30:06.644 --rc geninfo_all_blocks=1 00:30:06.644 --rc geninfo_unexecuted_blocks=1 00:30:06.644 00:30:06.644 ' 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.644 --rc genhtml_branch_coverage=1 00:30:06.644 --rc genhtml_function_coverage=1 00:30:06.644 --rc genhtml_legend=1 00:30:06.644 --rc geninfo_all_blocks=1 00:30:06.644 --rc geninfo_unexecuted_blocks=1 00:30:06.644 00:30:06.644 ' 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.644 --rc genhtml_branch_coverage=1 00:30:06.644 --rc genhtml_function_coverage=1 00:30:06.644 --rc genhtml_legend=1 00:30:06.644 --rc geninfo_all_blocks=1 00:30:06.644 --rc 
geninfo_unexecuted_blocks=1 00:30:06.644 00:30:06.644 ' 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.644 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.645 08:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:11.907 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
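The lines above load the supported-NIC PCI ID tables (e810/x722/mlx) from nvmf/common.sh; the loop that follows walks each matching PCI function under sysfs and records the kernel netdev bound to it. A rough stand-alone sketch of that discovery step, assuming the same Intel E810 ID (0x8086:0x159b) reported on this host, would be (this loop is illustrative, not the actual helper):

    # print the netdev name(s) behind every Intel E810 (0x8086:0x159b) PCI function
    for pci in /sys/bus/pci/devices/*; do
        if [ "$(cat "$pci/vendor")" = "0x8086" ] && [ "$(cat "$pci/device")" = "0x159b" ]; then
            ls "$pci/net" 2>/dev/null    # e.g. cvl_0_0 and cvl_0_1 on this rig
        fi
    done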
00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:11.908 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:11.908 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:11.908 Found net devices under 0000:86:00.0: cvl_0_0 00:30:11.908 
08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:11.908 Found net devices under 0000:86:00.1: cvl_0_1 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.908 08:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.166 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:12.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:30:12.167 00:30:12.167 --- 10.0.0.2 ping statistics --- 00:30:12.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.167 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:30:12.167 00:30:12.167 --- 10.0.0.1 ping statistics --- 00:30:12.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.167 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2650419 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2650419 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2650419 ']' 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
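nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point test fabric: the target-side port cvl_0_0 is moved into a fresh namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and reachability is checked with one ping in each direction. Condensed from the commands in this trace (interface and namespace names as on this rig):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator side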
00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.167 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.167 [2024-11-27 08:13:06.237062] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:12.167 [2024-11-27 08:13:06.238003] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:30:12.167 [2024-11-27 08:13:06.238038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.426 [2024-11-27 08:13:06.305258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:12.426 [2024-11-27 08:13:06.349074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.426 [2024-11-27 08:13:06.349113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.426 [2024-11-27 08:13:06.349120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.426 [2024-11-27 08:13:06.349127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.426 [2024-11-27 08:13:06.349133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.426 [2024-11-27 08:13:06.350641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.426 [2024-11-27 08:13:06.350740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.426 [2024-11-27 08:13:06.350827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:12.426 [2024-11-27 08:13:06.350829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.426 [2024-11-27 08:13:06.351144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
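The EAL/reactor notices above come from nvmfappstart, which starts the target inside the namespace with a 4-core mask, interrupt mode, and --wait-for-rpc, so nothing is configured until the RPC socket answers. The launch recorded in this run is essentially the following (the backgrounding and pid variable reflect how the helper behaves and are an assumption; waitforlisten is the harness helper that polls /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
            -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!               # 2650419 in this run
    waitforlisten $nvmfpid   # wait for the app to listen on /var/tmp/spdk.sock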
00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.426 [2024-11-27 08:13:06.498118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:12.426 [2024-11-27 08:13:06.498192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:12.426 [2024-11-27 08:13:06.498718] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:12.426 [2024-11-27 08:13:06.499186] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
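With the app held at --wait-for-rpc, bdev_io_wait.sh configures everything through rpc_cmd: the two init calls above (a deliberately tiny bdev_io pool/cache so bdevperf hits the io_wait path), then, in the lines that follow, the TCP transport, a 64 MiB / 512 B malloc bdev, and a subsystem exporting it on 10.0.0.2:4420. Outside the harness the same sequence could be driven with scripts/rpc.py against the default /var/tmp/spdk.sock; the flags below are taken verbatim from this trace, while the rpc.py spelling is the assumed equivalent of the rpc_cmd wrapper:

    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420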
00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.426 [2024-11-27 08:13:06.511516] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.426 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.685 Malloc0 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.685 [2024-11-27 08:13:06.567436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2650444 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2650446 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:12.685 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:12.685 { 00:30:12.685 "params": { 00:30:12.685 "name": "Nvme$subsystem", 00:30:12.685 "trtype": "$TEST_TRANSPORT", 00:30:12.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.685 "adrfam": "ipv4", 00:30:12.685 "trsvcid": "$NVMF_PORT", 00:30:12.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.685 "hdgst": ${hdgst:-false}, 00:30:12.685 "ddgst": ${ddgst:-false} 00:30:12.685 }, 00:30:12.685 "method": "bdev_nvme_attach_controller" 00:30:12.685 } 00:30:12.685 EOF 00:30:12.685 )") 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2650448 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:12.686 { 00:30:12.686 "params": { 00:30:12.686 "name": "Nvme$subsystem", 00:30:12.686 "trtype": "$TEST_TRANSPORT", 00:30:12.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.686 "adrfam": "ipv4", 00:30:12.686 "trsvcid": "$NVMF_PORT", 00:30:12.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.686 "hdgst": ${hdgst:-false}, 00:30:12.686 "ddgst": ${ddgst:-false} 00:30:12.686 }, 00:30:12.686 "method": "bdev_nvme_attach_controller" 00:30:12.686 } 00:30:12.686 EOF 00:30:12.686 )") 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=2650451 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:12.686 { 00:30:12.686 "params": { 00:30:12.686 "name": "Nvme$subsystem", 00:30:12.686 "trtype": "$TEST_TRANSPORT", 00:30:12.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.686 "adrfam": "ipv4", 00:30:12.686 "trsvcid": "$NVMF_PORT", 00:30:12.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.686 "hdgst": ${hdgst:-false}, 00:30:12.686 "ddgst": ${ddgst:-false} 00:30:12.686 }, 00:30:12.686 "method": "bdev_nvme_attach_controller" 00:30:12.686 } 00:30:12.686 EOF 00:30:12.686 )") 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:12.686 { 00:30:12.686 "params": { 00:30:12.686 "name": "Nvme$subsystem", 00:30:12.686 "trtype": "$TEST_TRANSPORT", 00:30:12.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.686 "adrfam": "ipv4", 00:30:12.686 "trsvcid": "$NVMF_PORT", 00:30:12.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.686 "hdgst": ${hdgst:-false}, 00:30:12.686 "ddgst": ${ddgst:-false} 00:30:12.686 }, 00:30:12.686 "method": "bdev_nvme_attach_controller" 00:30:12.686 } 00:30:12.686 EOF 00:30:12.686 )") 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2650444 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
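The heredocs above are gen_nvmf_target_json assembling, per bdevperf instance, the bdev_nvme_attach_controller parameters (Nvme1 -> 10.0.0.2:4420, cnode1/host1, digests off); the rendered JSON is printed a few lines below and handed to bdevperf on /dev/fd/63. Each instance runs the same 128-deep, 4 KiB, 1-second job, differing only in core mask, shm id, and workload (write/read/flush/unmap). The write instance from this trace, written out as a single command (the <(...) process substitution is the assumed source of the /dev/fd/63 path seen in the trace):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!    # 2650444 here; read/flush/unmap use -m 0x20/0x40/0x80 and -i 2/3/4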
00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:12.686 "params": { 00:30:12.686 "name": "Nvme1", 00:30:12.686 "trtype": "tcp", 00:30:12.686 "traddr": "10.0.0.2", 00:30:12.686 "adrfam": "ipv4", 00:30:12.686 "trsvcid": "4420", 00:30:12.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.686 "hdgst": false, 00:30:12.686 "ddgst": false 00:30:12.686 }, 00:30:12.686 "method": "bdev_nvme_attach_controller" 00:30:12.686 }' 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:12.686 "params": { 00:30:12.686 "name": "Nvme1", 00:30:12.686 "trtype": "tcp", 00:30:12.686 "traddr": "10.0.0.2", 00:30:12.686 "adrfam": "ipv4", 00:30:12.686 "trsvcid": "4420", 00:30:12.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.686 "hdgst": false, 00:30:12.686 "ddgst": false 00:30:12.686 }, 00:30:12.686 "method": "bdev_nvme_attach_controller" 00:30:12.686 }' 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:12.686 "params": { 00:30:12.686 "name": "Nvme1", 00:30:12.686 "trtype": "tcp", 00:30:12.686 "traddr": "10.0.0.2", 00:30:12.686 "adrfam": "ipv4", 00:30:12.686 "trsvcid": "4420", 00:30:12.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.686 "hdgst": false, 00:30:12.686 "ddgst": false 00:30:12.686 }, 00:30:12.686 "method": "bdev_nvme_attach_controller" 00:30:12.686 }' 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:12.686 08:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:12.686 "params": { 00:30:12.686 "name": "Nvme1", 00:30:12.686 "trtype": "tcp", 00:30:12.686 "traddr": "10.0.0.2", 00:30:12.686 "adrfam": "ipv4", 00:30:12.686 "trsvcid": "4420", 00:30:12.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.686 "hdgst": false, 00:30:12.686 "ddgst": false 00:30:12.686 }, 00:30:12.686 "method": "bdev_nvme_attach_controller" 00:30:12.686 }' 00:30:12.686 [2024-11-27 08:13:06.618848] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:30:12.686 [2024-11-27 08:13:06.618899] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:12.686 [2024-11-27 08:13:06.619514] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:30:12.686 [2024-11-27 08:13:06.619555] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:12.686 [2024-11-27 08:13:06.619587] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:30:12.686 [2024-11-27 08:13:06.619630] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:12.686 [2024-11-27 08:13:06.624315] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:30:12.686 [2024-11-27 08:13:06.624356] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:12.945 [2024-11-27 08:13:06.813336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.945 [2024-11-27 08:13:06.856520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:12.945 [2024-11-27 08:13:06.905701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.945 [2024-11-27 08:13:06.958538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:12.945 [2024-11-27 08:13:06.964996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.945 [2024-11-27 08:13:07.007857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:12.945 [2024-11-27 08:13:07.022784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.203 [2024-11-27 08:13:07.065920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:13.203 Running I/O for 1 seconds... 00:30:13.203 Running I/O for 1 seconds... 00:30:13.203 Running I/O for 1 seconds... 00:30:13.203 Running I/O for 1 seconds... 
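The per-job tables that follow report IOPS, throughput, and latency (microseconds) for the 1-second runs. As a quick sanity check, throughput is IOPS × IO size: the read job's 12027.89 IOPS × 4096 B is about 46.98 MiB/s, matching its MiB/s column, and the flush job's ~238k IOPS works out to ~930.9 MiB/s; flush can run that much faster since Malloc0 is RAM-backed and a flush has essentially nothing to do.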
00:30:14.579 11981.00 IOPS, 46.80 MiB/s 00:30:14.579 Latency(us) 00:30:14.579 [2024-11-27T07:13:08.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.579 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:14.579 Nvme1n1 : 1.01 12027.89 46.98 0.00 0.00 10603.82 3462.01 12081.42 00:30:14.579 [2024-11-27T07:13:08.688Z] =================================================================================================================== 00:30:14.579 [2024-11-27T07:13:08.688Z] Total : 12027.89 46.98 0.00 0.00 10603.82 3462.01 12081.42 00:30:14.579 238312.00 IOPS, 930.91 MiB/s 00:30:14.579 Latency(us) 00:30:14.579 [2024-11-27T07:13:08.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.579 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:14.579 Nvme1n1 : 1.00 237942.86 929.46 0.00 0.00 535.13 227.06 1531.55 00:30:14.579 [2024-11-27T07:13:08.688Z] =================================================================================================================== 00:30:14.579 [2024-11-27T07:13:08.688Z] Total : 237942.86 929.46 0.00 0.00 535.13 227.06 1531.55 00:30:14.579 9951.00 IOPS, 38.87 MiB/s 00:30:14.579 Latency(us) 00:30:14.579 [2024-11-27T07:13:08.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.579 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:14.579 Nvme1n1 : 1.01 10029.43 39.18 0.00 0.00 12720.32 1538.67 14930.81 00:30:14.579 [2024-11-27T07:13:08.688Z] =================================================================================================================== 00:30:14.579 [2024-11-27T07:13:08.688Z] Total : 10029.43 39.18 0.00 0.00 12720.32 1538.67 14930.81 00:30:14.579 11026.00 IOPS, 43.07 MiB/s 00:30:14.579 Latency(us) 00:30:14.579 [2024-11-27T07:13:08.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.579 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:14.579 Nvme1n1 : 1.00 11119.99 43.44 0.00 0.00 11485.72 2721.17 17210.32 00:30:14.579 [2024-11-27T07:13:08.688Z] =================================================================================================================== 00:30:14.579 [2024-11-27T07:13:08.688Z] Total : 11119.99 43.44 0.00 0.00 11485.72 2721.17 17210.32 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2650446 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2650448 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2650451 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.579 rmmod nvme_tcp 00:30:14.579 rmmod nvme_fabrics 00:30:14.579 rmmod nvme_keyring 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2650419 ']' 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2650419 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2650419 ']' 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2650419 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2650419 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:14.579 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2650419' 00:30:14.579 killing process with pid 2650419 00:30:14.580 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2650419 00:30:14.580 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2650419 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 
00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.838 08:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.742 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.742 00:30:16.742 real 0m10.256s 00:30:16.742 user 0m15.010s 00:30:16.742 sys 0m6.187s 00:30:16.742 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:16.742 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.742 ************************************ 00:30:16.742 END TEST nvmf_bdev_io_wait 00:30:16.742 ************************************ 00:30:16.742 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:16.742 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:16.742 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.742 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:17.001 ************************************ 00:30:17.001 START TEST nvmf_queue_depth 00:30:17.001 ************************************ 00:30:17.001 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:17.001 * Looking for test storage... 
00:30:17.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:17.001 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:17.001 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:17.001 08:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:17.001 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:17.001 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.001 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.001 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.001 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.001 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.001 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.001 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.001 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.002 --rc genhtml_branch_coverage=1 00:30:17.002 --rc genhtml_function_coverage=1 00:30:17.002 --rc genhtml_legend=1 00:30:17.002 --rc geninfo_all_blocks=1 00:30:17.002 --rc geninfo_unexecuted_blocks=1 00:30:17.002 00:30:17.002 ' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.002 --rc genhtml_branch_coverage=1 00:30:17.002 --rc genhtml_function_coverage=1 00:30:17.002 --rc genhtml_legend=1 00:30:17.002 --rc geninfo_all_blocks=1 00:30:17.002 --rc geninfo_unexecuted_blocks=1 00:30:17.002 00:30:17.002 ' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.002 --rc genhtml_branch_coverage=1 00:30:17.002 --rc genhtml_function_coverage=1 00:30:17.002 --rc genhtml_legend=1 00:30:17.002 --rc geninfo_all_blocks=1 00:30:17.002 --rc geninfo_unexecuted_blocks=1 00:30:17.002 00:30:17.002 ' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.002 --rc genhtml_branch_coverage=1 00:30:17.002 --rc genhtml_function_coverage=1 00:30:17.002 --rc genhtml_legend=1 00:30:17.002 --rc geninfo_all_blocks=1 00:30:17.002 --rc 
geninfo_unexecuted_blocks=1 00:30:17.002 00:30:17.002 ' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:17.002 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:17.003 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:17.003 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.003 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.003 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.003 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:17.003 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:17.003 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.003 08:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.272 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:22.273 08:13:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:22.273 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:22.273 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:22.273 Found net devices under 0000:86:00.0: cvl_0_0 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:22.273 Found net devices under 0000:86:00.1: cvl_0_1 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:22.273 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:22.274 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.274 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.274 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:22.274 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:22.274 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.274 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:22.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:30:22.533 00:30:22.533 --- 10.0.0.2 ping statistics --- 00:30:22.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.533 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:22.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:30:22.533 00:30:22.533 --- 10.0.0.1 ping statistics --- 00:30:22.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.533 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2654220 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2654220 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2654220 ']' 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
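Condensed, the nvmf_tcp_init plumbing traced above reduces to the following steps (interface, namespace, and address names exactly as they appear in this log; a sketch rather than a verbatim extract of nvmf/common.sh):

ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port, tagged so teardown can strip the rule again
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                             # initiator -> target (0.355 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator (0.191 ms above)

The matching nvmftestfini/iptr teardown seen at the start and end of this section reverses this: iptables-save piped through grep -v SPDK_NVMF into iptables-restore, removal of the cvl_0_0_ns_spdk namespace, and an ip -4 addr flush of cvl_0_1.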
00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.533 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.792 [2024-11-27 08:13:16.656056] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:22.792 [2024-11-27 08:13:16.657007] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:30:22.792 [2024-11-27 08:13:16.657043] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.792 [2024-11-27 08:13:16.725547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.792 [2024-11-27 08:13:16.766761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.792 [2024-11-27 08:13:16.766797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.792 [2024-11-27 08:13:16.766805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.792 [2024-11-27 08:13:16.766811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.792 [2024-11-27 08:13:16.766817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.792 [2024-11-27 08:13:16.767353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.792 [2024-11-27 08:13:16.835794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:22.792 [2024-11-27 08:13:16.836017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
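nvmfappstart then launches the target inside that namespace in interrupt mode and blocks until its RPC socket answers. A rough equivalent of that step, using the command line visible in the trace; the rpc_get_methods polling loop is only an approximation of waitforlisten, not a copy of it:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# poll the default RPC socket until the app is ready to accept configuration RPCs
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done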
00:30:22.792 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.792 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:22.792 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.792 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.792 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.051 [2024-11-27 08:13:16.912001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.051 Malloc0 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.051 [2024-11-27 08:13:16.975921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2654243 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2654243 /var/tmp/bdevperf.sock 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2654243 ']' 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.051 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:23.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:23.052 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.052 08:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.052 [2024-11-27 08:13:17.028007] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
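Stripped of the xtrace noise, the rpc_cmd calls above configure the target as follows (arguments taken verbatim from the trace; rpc_cmd ultimately drives the same scripts/rpc.py interface on the default socket):

rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0               # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420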
00:30:23.052 [2024-11-27 08:13:17.028051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2654243 ] 00:30:23.052 [2024-11-27 08:13:17.091149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.052 [2024-11-27 08:13:17.133961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.310 08:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.310 08:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:23.310 08:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:23.310 08:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.310 08:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.310 NVMe0n1 00:30:23.310 08:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.310 08:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:23.310 Running I/O for 10 seconds... 00:30:25.614 11343.00 IOPS, 44.31 MiB/s [2024-11-27T07:13:20.654Z] 11773.00 IOPS, 45.99 MiB/s [2024-11-27T07:13:21.588Z] 11884.33 IOPS, 46.42 MiB/s [2024-11-27T07:13:22.522Z] 11962.50 IOPS, 46.73 MiB/s [2024-11-27T07:13:23.456Z] 11979.00 IOPS, 46.79 MiB/s [2024-11-27T07:13:24.456Z] 12000.00 IOPS, 46.88 MiB/s [2024-11-27T07:13:25.827Z] 12000.71 IOPS, 46.88 MiB/s [2024-11-27T07:13:26.759Z] 12034.38 IOPS, 47.01 MiB/s [2024-11-27T07:13:27.690Z] 12049.11 IOPS, 47.07 MiB/s [2024-11-27T07:13:27.690Z] 12070.10 IOPS, 47.15 MiB/s 00:30:33.581 Latency(us) 00:30:33.581 [2024-11-27T07:13:27.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.581 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:33.581 Verification LBA range: start 0x0 length 0x4000 00:30:33.581 NVMe0n1 : 10.07 12078.62 47.18 0.00 0.00 84484.64 19489.84 56759.87 00:30:33.581 [2024-11-27T07:13:27.690Z] =================================================================================================================== 00:30:33.581 [2024-11-27T07:13:27.690Z] Total : 12078.62 47.18 0.00 0.00 84484.64 19489.84 56759.87 00:30:33.581 { 00:30:33.581 "results": [ 00:30:33.581 { 00:30:33.581 "job": "NVMe0n1", 00:30:33.581 "core_mask": "0x1", 00:30:33.581 "workload": "verify", 00:30:33.581 "status": "finished", 00:30:33.581 "verify_range": { 00:30:33.581 "start": 0, 00:30:33.581 "length": 16384 00:30:33.581 }, 00:30:33.581 "queue_depth": 1024, 00:30:33.581 "io_size": 4096, 00:30:33.581 "runtime": 10.067873, 00:30:33.581 "iops": 12078.618790682003, 00:30:33.581 "mibps": 47.182104651101575, 00:30:33.581 "io_failed": 0, 00:30:33.581 "io_timeout": 0, 00:30:33.581 "avg_latency_us": 84484.63962461805, 00:30:33.581 "min_latency_us": 19489.83652173913, 00:30:33.581 "max_latency_us": 56759.8747826087 00:30:33.581 } 
00:30:33.581 ], 00:30:33.581 "core_count": 1 00:30:33.581 } 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2654243 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2654243 ']' 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2654243 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2654243 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2654243' 00:30:33.581 killing process with pid 2654243 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2654243 00:30:33.581 Received shutdown signal, test time was about 10.000000 seconds 00:30:33.581 00:30:33.581 Latency(us) 00:30:33.581 [2024-11-27T07:13:27.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.581 [2024-11-27T07:13:27.690Z] =================================================================================================================== 00:30:33.581 [2024-11-27T07:13:27.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:33.581 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2654243 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.839 rmmod nvme_tcp 00:30:33.839 rmmod nvme_fabrics 00:30:33.839 rmmod nvme_keyring 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
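On the initiator side, the measurement above comes from bdevperf started in -z (wait-for-RPC) mode on its own socket, an NVMe-oF controller attached to the listener created earlier, and a scripted 10 second verify run at queue depth 1024 with 4 KiB IOs. A condensed sketch with the arguments copied from the trace:

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
./scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

As a consistency check, the reported MiB/s follows directly from the IOPS figure and the 4096 byte IO size:

awk 'BEGIN { printf "%.2f MiB/s\n", 12078.618790682003 * 4096 / 1048576 }'    # prints 47.18, matching "mibps" in the JSON above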
00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2654220 ']' 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2654220 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2654220 ']' 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2654220 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2654220 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2654220' 00:30:33.839 killing process with pid 2654220 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2654220 00:30:33.839 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2654220 00:30:34.097 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:34.097 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:34.097 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:34.097 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:34.097 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:34.098 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:34.098 08:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:34.098 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.098 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.098 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.098 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.098 08:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.998 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.998 00:30:35.998 real 0m19.194s 00:30:35.998 user 0m22.427s 00:30:35.998 sys 0m6.014s 00:30:35.998 08:13:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.998 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:35.998 ************************************ 00:30:35.998 END TEST nvmf_queue_depth 00:30:35.998 ************************************ 00:30:35.998 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:35.998 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:35.998 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.998 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:36.256 ************************************ 00:30:36.256 START TEST nvmf_target_multipath 00:30:36.256 ************************************ 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:36.256 * Looking for test storage... 00:30:36.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:36.256 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:36.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.257 --rc genhtml_branch_coverage=1 00:30:36.257 --rc genhtml_function_coverage=1 00:30:36.257 --rc genhtml_legend=1 00:30:36.257 --rc geninfo_all_blocks=1 00:30:36.257 --rc geninfo_unexecuted_blocks=1 00:30:36.257 00:30:36.257 ' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:36.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.257 --rc genhtml_branch_coverage=1 00:30:36.257 --rc genhtml_function_coverage=1 00:30:36.257 --rc genhtml_legend=1 00:30:36.257 --rc geninfo_all_blocks=1 00:30:36.257 --rc geninfo_unexecuted_blocks=1 00:30:36.257 00:30:36.257 ' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:36.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.257 --rc genhtml_branch_coverage=1 00:30:36.257 --rc genhtml_function_coverage=1 00:30:36.257 --rc genhtml_legend=1 
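# The lcov check above runs a dotted-version comparison ("lt 1.15 2"). A
# hedged, condensed sketch of that logic as reconstructed from the trace;
# the real scripts/common.sh cmp_versions handles more operators and
# non-numeric fields:
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        (( a > b )) && return 1                 # first differing field decides
        (( a < b )) && return 0
    done
    return 1                                    # equal versions are not "less than"
}
# e.g. version_lt 1.15 2 returns 0 here, so the --rc lcov_branch_coverage=1
# style options are selected, matching the exports traced below.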
00:30:36.257 --rc geninfo_all_blocks=1 00:30:36.257 --rc geninfo_unexecuted_blocks=1 00:30:36.257 00:30:36.257 ' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:36.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.257 --rc genhtml_branch_coverage=1 00:30:36.257 --rc genhtml_function_coverage=1 00:30:36.257 --rc genhtml_legend=1 00:30:36.257 --rc geninfo_all_blocks=1 00:30:36.257 --rc geninfo_unexecuted_blocks=1 00:30:36.257 00:30:36.257 ' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
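# The nvmf/common.sh setup above derives the initiator identity once per
# test. Hedged sketch of those few lines; extracting the bare uuid from the
# generated NQN is an assumption based on the traced values, and the final
# connect line is illustrative only:
NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare uuid, e.g. 80aaeb9f-...
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'
# e.g. later: $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 \
#             -n nqn.2016-06.io.spdk:cnode1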
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
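# The very long PATH values above come from paths/export.sh: every time a
# test sources it, the toolchain directories are prepended again and PATH is
# re-exported, which is why the same prefix repeats once per sourcing across
# tests. A hedged sketch of the effect per sourcing, in the traced order:
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH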
NVMF_APP_SHM_ID 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.257 08:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
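# build_nvmf_app_args, traced above, assembles the nvmf_tgt invocation:
# shared-memory id, a full trace-flag mask, optional no-hugepage arguments,
# and --interrupt-mode because this job runs the interrupt-mode variant.
# Hedged sketch; the appended arguments are taken from the trace, but the
# gating variable name is an assumption (the trace only shows '[' 1 -eq 1 ']').
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # SHM id + enable all log flags
NVMF_APP+=("${NO_HUGE[@]}")                     # empty unless hugepages are disabled
if [ "${TEST_INTERRUPT_MODE:-0}" -eq 1 ]; then  # hypothetical flag name
    NVMF_APP+=(--interrupt-mode)
fi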
common/autotest_common.sh@10 -- # set +x 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.525 08:13:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:41.525 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:41.525 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.525 08:13:35 
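# Device discovery above buckets NICs by vendor:device id (0x8086:0x159b is
# the Intel E810 port found here) and keeps only the e810 list because this
# job sets SPDK_TEST_NVMF_NICS=e810. A hedged sketch of the same idea using
# sysfs directly; the real code walks a prebuilt pci_bus_cache map instead:
declare -a e810=()
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
    if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
        e810+=("${dev##*/}")                    # e.g. 0000:86:00.0
        echo "Found ${dev##*/} ($vendor - $device)"
    fi
done
pci_devs=("${e810[@]}")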
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:41.525 Found net devices under 0000:86:00.0: cvl_0_0 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:41.525 Found net devices under 0000:86:00.1: cvl_0_1 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.525 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
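# Each selected PCI function is then mapped to its kernel netdev through
# sysfs, which is how cvl_0_0 and cvl_0_1 are found above. Hedged sketch;
# the operstate read stands in for the traced '[[ up == up ]]' check:
net_devs=()
for pci in "${pci_devs[@]}"; do
    up_devs=()
    for net_path in "/sys/bus/pci/devices/$pci/net/"*; do   # e.g. .../net/cvl_0_0
        [[ $(cat "$net_path/operstate") == up ]] && up_devs+=("${net_path##*/}")
    done
    echo "Found net devices under $pci: ${up_devs[*]}"
    net_devs+=("${up_devs[@]}")
done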
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.526 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
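# nvmf_tcp_init, as traced above, turns the two E810 ports into a
# target/initiator pair on one machine: the target-side port is moved into
# its own network namespace and the two sides get 10.0.0.2 and 10.0.0.1.
# Condensed from the commands in the trace (error handling omitted):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow the NVMe/TCP listener port through, tagged so cleanup can find it
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1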
00:30:41.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:30:41.784 00:30:41.784 --- 10.0.0.2 ping statistics --- 00:30:41.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.784 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:30:41.784 00:30:41.784 --- 10.0.0.1 ping statistics --- 00:30:41.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.784 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:41.784 only one NIC for nvmf test 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.784 rmmod nvme_tcp 00:30:41.784 rmmod nvme_fabrics 00:30:41.784 rmmod nvme_keyring 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:41.784 08:13:35 
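# nvmfcleanup, traced above, unloads the kernel NVMe/TCP stack with a retry
# loop because the modules can still be referenced briefly after a test.
# Hedged sketch: the trace only shows the first, successful pass of the
# {1..20} loop, so the break condition and back-off below are assumptions.
sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumption: back off before retrying
done
set -e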
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.784 08:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.316 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.316 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:44.316 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:44.316 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:44.316 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:44.316 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.316 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:44.316 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.316 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:44.317 08:13:37 
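# multipath.sh@45-48, traced above, bails out early on this rig: with no
# second target IP available the multipath scenario cannot be built, so the
# test logs a notice, tears the environment down, and exits 0 (the EXIT trap
# then runs nvmftestfini once more). Hedged sketch; the guarded variable is
# assumed to be NVMF_SECOND_TARGET_IP, which the trace shows being set empty.
if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
    echo 'only one NIC for nvmf test'
    nvmftestfini
    exit 0
fi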
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.317 00:30:44.317 real 0m7.792s 00:30:44.317 user 0m1.706s 00:30:44.317 sys 0m4.105s 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:44.317 ************************************ 00:30:44.317 END TEST nvmf_target_multipath 00:30:44.317 ************************************ 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:44.317 ************************************ 00:30:44.317 START TEST nvmf_zcopy 00:30:44.317 ************************************ 00:30:44.317 08:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:44.317 * Looking for test storage... 
00:30:44.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:44.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.317 --rc genhtml_branch_coverage=1 00:30:44.317 --rc genhtml_function_coverage=1 00:30:44.317 --rc genhtml_legend=1 00:30:44.317 --rc geninfo_all_blocks=1 00:30:44.317 --rc geninfo_unexecuted_blocks=1 00:30:44.317 00:30:44.317 ' 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:44.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.317 --rc genhtml_branch_coverage=1 00:30:44.317 --rc genhtml_function_coverage=1 00:30:44.317 --rc genhtml_legend=1 00:30:44.317 --rc geninfo_all_blocks=1 00:30:44.317 --rc geninfo_unexecuted_blocks=1 00:30:44.317 00:30:44.317 ' 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:44.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.317 --rc genhtml_branch_coverage=1 00:30:44.317 --rc genhtml_function_coverage=1 00:30:44.317 --rc genhtml_legend=1 00:30:44.317 --rc geninfo_all_blocks=1 00:30:44.317 --rc geninfo_unexecuted_blocks=1 00:30:44.317 00:30:44.317 ' 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:44.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.317 --rc genhtml_branch_coverage=1 00:30:44.317 --rc genhtml_function_coverage=1 00:30:44.317 --rc genhtml_legend=1 00:30:44.317 --rc geninfo_all_blocks=1 00:30:44.317 --rc geninfo_unexecuted_blocks=1 00:30:44.317 00:30:44.317 ' 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.317 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.318 08:13:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.318 08:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.593 08:13:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:49.593 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:49.593 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:49.593 Found net devices under 0000:86:00.0: cvl_0_0 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:49.593 Found net devices under 0000:86:00.1: cvl_0_1 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:49.593 08:13:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:49.593 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:49.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:30:49.593 00:30:49.594 --- 10.0.0.2 ping statistics --- 00:30:49.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.594 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:49.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:30:49.594 00:30:49.594 --- 10.0.0.1 ping statistics --- 00:30:49.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.594 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2662885 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2662885 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2662885 ']' 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.594 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.594 [2024-11-27 08:13:43.677433] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:49.594 [2024-11-27 08:13:43.678371] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:30:49.594 [2024-11-27 08:13:43.678407] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.852 [2024-11-27 08:13:43.744755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.852 [2024-11-27 08:13:43.785603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.852 [2024-11-27 08:13:43.785638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.852 [2024-11-27 08:13:43.785645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.852 [2024-11-27 08:13:43.785651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.852 [2024-11-27 08:13:43.785657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.852 [2024-11-27 08:13:43.786183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.852 [2024-11-27 08:13:43.853419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:49.852 [2024-11-27 08:13:43.853651] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
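Condensed for orientation, the nvmftestinit trace above boils down to the following shell steps; interface names, addresses and flags are the ones logged in this run, and the listing is an illustrative sketch rather than the literal common.sh implementation:

    # Move the target-side e810 port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (host namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port, verify reachability both ways, load the host driver
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp

    # Start the target inside the namespace in interrupt mode, as nvmfappstart -m 0x2 does in this log
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &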
00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.852 [2024-11-27 08:13:43.914808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.852 [2024-11-27 08:13:43.938976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:49.852 08:13:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.852 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.110 malloc0 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:50.110 { 00:30:50.110 "params": { 00:30:50.110 "name": "Nvme$subsystem", 00:30:50.110 "trtype": "$TEST_TRANSPORT", 00:30:50.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.110 "adrfam": "ipv4", 00:30:50.110 "trsvcid": "$NVMF_PORT", 00:30:50.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.110 "hdgst": ${hdgst:-false}, 00:30:50.110 "ddgst": ${ddgst:-false} 00:30:50.110 }, 00:30:50.110 "method": "bdev_nvme_attach_controller" 00:30:50.110 } 00:30:50.110 EOF 00:30:50.110 )") 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:50.110 08:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:50.110 "params": { 00:30:50.110 "name": "Nvme1", 00:30:50.110 "trtype": "tcp", 00:30:50.110 "traddr": "10.0.0.2", 00:30:50.110 "adrfam": "ipv4", 00:30:50.110 "trsvcid": "4420", 00:30:50.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.110 "hdgst": false, 00:30:50.110 "ddgst": false 00:30:50.110 }, 00:30:50.110 "method": "bdev_nvme_attach_controller" 00:30:50.110 }' 00:30:50.110 [2024-11-27 08:13:44.030378] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:30:50.110 [2024-11-27 08:13:44.030424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662913 ] 00:30:50.110 [2024-11-27 08:13:44.092459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.110 [2024-11-27 08:13:44.135232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.368 Running I/O for 10 seconds... 00:30:52.231 8238.00 IOPS, 64.36 MiB/s [2024-11-27T07:13:47.716Z] 8320.50 IOPS, 65.00 MiB/s [2024-11-27T07:13:48.647Z] 8346.33 IOPS, 65.21 MiB/s [2024-11-27T07:13:49.577Z] 8359.75 IOPS, 65.31 MiB/s [2024-11-27T07:13:50.507Z] 8366.20 IOPS, 65.36 MiB/s [2024-11-27T07:13:51.435Z] 8362.50 IOPS, 65.33 MiB/s [2024-11-27T07:13:52.369Z] 8360.43 IOPS, 65.32 MiB/s [2024-11-27T07:13:53.742Z] 8362.12 IOPS, 65.33 MiB/s [2024-11-27T07:13:54.675Z] 8359.44 IOPS, 65.31 MiB/s [2024-11-27T07:13:54.675Z] 8361.10 IOPS, 65.32 MiB/s 00:31:00.566 Latency(us) 00:31:00.566 [2024-11-27T07:13:54.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.566 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:00.566 Verification LBA range: start 0x0 length 0x1000 00:31:00.566 Nvme1n1 : 10.01 8365.45 65.36 0.00 0.00 15257.52 1852.10 22567.18 00:31:00.566 [2024-11-27T07:13:54.675Z] =================================================================================================================== 00:31:00.566 [2024-11-27T07:13:54.675Z] Total : 8365.45 65.36 0.00 0.00 15257.52 1852.10 22567.18 00:31:00.566 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2664551 00:31:00.566 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:00.566 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:00.566 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:00.566 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:00.566 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:00.566 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:00.566 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:00.566 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:00.566 { 00:31:00.566 "params": { 00:31:00.566 "name": "Nvme$subsystem", 00:31:00.566 "trtype": "$TEST_TRANSPORT", 00:31:00.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.567 "adrfam": "ipv4", 00:31:00.567 "trsvcid": "$NVMF_PORT", 00:31:00.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.567 "hdgst": ${hdgst:-false}, 00:31:00.567 "ddgst": ${ddgst:-false} 00:31:00.567 }, 00:31:00.567 "method": "bdev_nvme_attach_controller" 00:31:00.567 } 00:31:00.567 EOF 00:31:00.567 )") 00:31:00.567 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:00.567 
[2024-11-27 08:13:54.534530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.534562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:00.567 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:00.567 08:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:00.567 "params": { 00:31:00.567 "name": "Nvme1", 00:31:00.567 "trtype": "tcp", 00:31:00.567 "traddr": "10.0.0.2", 00:31:00.567 "adrfam": "ipv4", 00:31:00.567 "trsvcid": "4420", 00:31:00.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:00.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:00.567 "hdgst": false, 00:31:00.567 "ddgst": false 00:31:00.567 }, 00:31:00.567 "method": "bdev_nvme_attach_controller" 00:31:00.567 }' 00:31:00.567 [2024-11-27 08:13:54.546490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.546504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 [2024-11-27 08:13:54.558492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.558503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 [2024-11-27 08:13:54.570489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.570500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 [2024-11-27 08:13:54.574563] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:31:00.567 [2024-11-27 08:13:54.574606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664551 ] 00:31:00.567 [2024-11-27 08:13:54.582486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.582498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 [2024-11-27 08:13:54.594486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.594497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 [2024-11-27 08:13:54.606489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.606500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 [2024-11-27 08:13:54.618488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.618499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 [2024-11-27 08:13:54.630486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.630496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 [2024-11-27 08:13:54.636746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.567 [2024-11-27 08:13:54.642488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.642499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 [2024-11-27 08:13:54.654509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.654526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.567 [2024-11-27 08:13:54.666487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.567 [2024-11-27 08:13:54.666497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.678506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.678529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.679746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.825 [2024-11-27 08:13:54.690499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.690517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.702498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.702517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.714497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.714510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.726492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:31:00.825 [2024-11-27 08:13:54.726506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.738491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.738504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.750490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.750502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.762487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.762498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.774497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.774517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.786526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.786547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.798491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.798505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.810488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.810500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.822491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.822506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.834491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.834505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.846492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.846507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.858493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.858509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.870494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.870512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 Running I/O for 5 seconds... 
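While this 5-second randrw bdevperf run is in flight, the repeated spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused errors above and below come from the test re-issuing nvmf_subsystem_add_ns for NSID 1, which is already attached, apparently to drive subsystem pause/resume under load. For orientation, the target provisioning traced earlier by zcopy.sh (script lines 22–30) corresponds roughly to the sequence below; parameter values are taken from this run and the listing is a sketch, not the script's literal code:

    # Target provisioning as traced above (rpc_cmd is the test wrapper around scripts/rpc.py)
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Re-running the last call while NSID 1 is attached yields the
    # "Requested NSID 1 already in use" / "Unable to add namespace" errors logged here.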
00:31:00.825 [2024-11-27 08:13:54.884566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.884587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.899753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.899772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.915217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.915236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.825 [2024-11-27 08:13:54.931029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.825 [2024-11-27 08:13:54.931050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:54.946400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:54.946420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:54.960954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:54.960991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:54.976064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:54.976084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:54.990819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:54.990838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.007368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.007389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.022828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.022847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.035402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.035422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.050745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.050769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.062309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.062329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.076769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.076789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.092239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 
[2024-11-27 08:13:55.092259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.107845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.107864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.123191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.123214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.138836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.138856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.154581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.154601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.166623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.166642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.083 [2024-11-27 08:13:55.181049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.083 [2024-11-27 08:13:55.181069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.340 [2024-11-27 08:13:55.196391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.340 [2024-11-27 08:13:55.196412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.340 [2024-11-27 08:13:55.211480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.340 [2024-11-27 08:13:55.211500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.340 [2024-11-27 08:13:55.226926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.340 [2024-11-27 08:13:55.226945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.340 [2024-11-27 08:13:55.242720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.340 [2024-11-27 08:13:55.242741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.340 [2024-11-27 08:13:55.255262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.340 [2024-11-27 08:13:55.255287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.340 [2024-11-27 08:13:55.270622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.340 [2024-11-27 08:13:55.270641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.340 [2024-11-27 08:13:55.282106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.340 [2024-11-27 08:13:55.282126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.340 [2024-11-27 08:13:55.296827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.340 [2024-11-27 08:13:55.296846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.340 [2024-11-27 08:13:55.311855] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.340 [2024-11-27 08:13:55.311874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.340 [2024-11-27 08:13:55.326727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.340 [2024-11-27 08:13:55.326750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.341 [2024-11-27 08:13:55.338559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.341 [2024-11-27 08:13:55.338578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.341 [2024-11-27 08:13:55.352412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.341 [2024-11-27 08:13:55.352432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.341 [2024-11-27 08:13:55.367649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.341 [2024-11-27 08:13:55.367668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.341 [2024-11-27 08:13:55.382434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.341 [2024-11-27 08:13:55.382453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.341 [2024-11-27 08:13:55.396586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.341 [2024-11-27 08:13:55.396606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.341 [2024-11-27 08:13:55.411649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.341 [2024-11-27 08:13:55.411668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.341 [2024-11-27 08:13:55.426561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.341 [2024-11-27 08:13:55.426580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.341 [2024-11-27 08:13:55.439535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.341 [2024-11-27 08:13:55.439555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.598 [2024-11-27 08:13:55.450499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.598 [2024-11-27 08:13:55.450522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.598 [2024-11-27 08:13:55.464256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.598 [2024-11-27 08:13:55.464276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.598 [2024-11-27 08:13:55.479685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.598 [2024-11-27 08:13:55.479709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.598 [2024-11-27 08:13:55.494425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.598 [2024-11-27 08:13:55.494444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.598 [2024-11-27 08:13:55.507917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.507936] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.523035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.523054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.538808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.538827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.554398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.554418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.565733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.565751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.580243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.580263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.595236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.595255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.610486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.610506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.623695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.623714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.639184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.639203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.654326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.654346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.668304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.668323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.683390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.683408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.599 [2024-11-27 08:13:55.698395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.599 [2024-11-27 08:13:55.698414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.856 [2024-11-27 08:13:55.710132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.710151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.724028] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.724046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.739384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.739402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.754076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.754096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.767230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.767248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.780043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.780063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.795191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.795211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.810553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.810572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.821795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.821814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.836426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.836445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.851484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.851502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.866573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.866594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 16223.00 IOPS, 126.74 MiB/s [2024-11-27T07:13:55.966Z] [2024-11-27 08:13:55.880611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.880631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.895754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.895774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.910724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.910744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.923726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:01.857 [2024-11-27 08:13:55.923747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.935019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.935038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.950731] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.950750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.857 [2024-11-27 08:13:55.964605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.857 [2024-11-27 08:13:55.964626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:55.979989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:55.980015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:55.995172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:55.995191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.010503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.010522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.024342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.024363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.040149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.040169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.055305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.055324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.070808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.070827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.083259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.083279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.098431] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.098452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.109148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.109168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.124679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.124700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.140122] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.140141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.155335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.155354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.170743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.170763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.183193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.183212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.198410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.198431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.116 [2024-11-27 08:13:56.209612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.116 [2024-11-27 08:13:56.209631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.225076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.225096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.239889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.239909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.255171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.255190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.270944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.270970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.283393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.283412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.294752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.294771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.310934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.310962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.326866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.326885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.343050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.343068] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.358603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.358623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.370185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.370205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.385009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.385028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.399896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.399920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.415212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.415232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.430505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.430525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.444488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.444508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.459647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.459666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.375 [2024-11-27 08:13:56.474592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.375 [2024-11-27 08:13:56.474615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.633 [2024-11-27 08:13:56.486265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.633 [2024-11-27 08:13:56.486286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.633 [2024-11-27 08:13:56.501074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.633 [2024-11-27 08:13:56.501094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.633 [2024-11-27 08:13:56.516430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.633 [2024-11-27 08:13:56.516449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.633 [2024-11-27 08:13:56.531136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.633 [2024-11-27 08:13:56.531156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.633 [2024-11-27 08:13:56.546750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.633 [2024-11-27 08:13:56.546769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.633 [2024-11-27 08:13:56.559262] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.633 [2024-11-27 08:13:56.559282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.633 [2024-11-27 08:13:56.572080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.633 [2024-11-27 08:13:56.572099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.633 [2024-11-27 08:13:56.587073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.587091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.634 [2024-11-27 08:13:56.602732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.602751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.634 [2024-11-27 08:13:56.615390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.615410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.634 [2024-11-27 08:13:56.630441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.630460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.634 [2024-11-27 08:13:56.643542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.643561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.634 [2024-11-27 08:13:56.658877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.658895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.634 [2024-11-27 08:13:56.676013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.676036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.634 [2024-11-27 08:13:56.691248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.691267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.634 [2024-11-27 08:13:56.706445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.706464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.634 [2024-11-27 08:13:56.720484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.720503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.634 [2024-11-27 08:13:56.735657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.634 [2024-11-27 08:13:56.735676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.750635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.750655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.764820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.764840] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.780762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.780781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.796135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.796154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.811404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.811423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.826360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.826378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.838100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.838128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.852619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.852638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.867854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.867874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 16232.50 IOPS, 126.82 MiB/s [2024-11-27T07:13:57.001Z] [2024-11-27 08:13:56.882613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.882632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.895489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.895509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.910972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.910991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.926613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.926633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.940493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.940512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.955630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.955653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.970741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.970761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 
08:13:56.981020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.981039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.892 [2024-11-27 08:13:56.996263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.892 [2024-11-27 08:13:56.996283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.011851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.011872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.026963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.026983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.042656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.042677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.055313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.055333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.070998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.071018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.086662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.086681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.098507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.098526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.112764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.112783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.128122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.128142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.143259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.143278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.158752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.158771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.170639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.170658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.150 [2024-11-27 08:13:57.184605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.150 [2024-11-27 08:13:57.184624] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.151 [2024-11-27 08:13:57.200338] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.151 [2024-11-27 08:13:57.200357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.151 [2024-11-27 08:13:57.215741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.151 [2024-11-27 08:13:57.215760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.151 [2024-11-27 08:13:57.231199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.151 [2024-11-27 08:13:57.231229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.151 [2024-11-27 08:13:57.247178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.151 [2024-11-27 08:13:57.247198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.263219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.263239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.279269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.279288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.294669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.294689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.307200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.307220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.320298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.320317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.335477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.335497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.350766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.350787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.363466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.363486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.379466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.379486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.394387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.394408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.405812] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.405833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.420051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.420070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.435340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.435359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.450224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.450244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.462693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.462712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.476505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.476526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.491968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.491987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.409 [2024-11-27 08:13:57.507187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.409 [2024-11-27 08:13:57.507205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.522794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.522814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.538917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.538936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.554925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.554945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.567556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.567575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.582901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.582920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.598831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.598849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.612107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.612126] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.627760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.627779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.643221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.643240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.658554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.658574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.671509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.671528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.687220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.687240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.702859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.702879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.718267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.718287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.731444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.731463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.747005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.667 [2024-11-27 08:13:57.747024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.667 [2024-11-27 08:13:57.759751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.668 [2024-11-27 08:13:57.759771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.668 [2024-11-27 08:13:57.770473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.668 [2024-11-27 08:13:57.770492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.925 [2024-11-27 08:13:57.783869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.925 [2024-11-27 08:13:57.783889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.925 [2024-11-27 08:13:57.799488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.799507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.814430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.814449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.827515] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.827534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.843174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.843193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.858509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.858529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.871572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.871590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.883003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.883021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 16207.67 IOPS, 126.62 MiB/s [2024-11-27T07:13:58.035Z] [2024-11-27 08:13:57.896633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.896651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.911965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.911984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.927138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.927156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.942691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.942711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.956463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.956482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.971715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.971734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.986360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.986380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:57.999473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:57.999492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:58.014535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:03.926 [2024-11-27 08:13:58.014554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:03.926 [2024-11-27 08:13:58.027003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:03.926 [2024-11-27 08:13:58.027022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.042515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.042540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.055092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.055111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.068027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.068046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.083211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.083231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.098435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.098455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.112070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.112089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.127200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.127219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.142543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.142563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.153821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.153840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.168282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.168301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.183732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.183751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.198705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.198724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.209745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.209764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.224747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.224766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.239425] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.239443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.254557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.254576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.267629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.267648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.184 [2024-11-27 08:13:58.283188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.184 [2024-11-27 08:13:58.283207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.298083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.298103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.311858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.311882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.321821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.321841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.336310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.336330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.351407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.351426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.366700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.366720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.379310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.379328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.394914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.394932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.410415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.410436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.424476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.424495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.439562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.439581] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.454675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.454694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.465341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.465360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.480500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.480519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.495807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.495826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.510811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.510830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.526850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.526869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.443 [2024-11-27 08:13:58.540479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.443 [2024-11-27 08:13:58.540499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.555669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.555689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.570518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.570538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.584419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.584443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.599495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.599514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.613718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.613738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.627659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.627678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.639069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.639088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.654243] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.654262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.667379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.667398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.682605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.682624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.693524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.693544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.708965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.709001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.724273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.724292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.739584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.739603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.751111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.751130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.764135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.764155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.779328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.779348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.794224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.794244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.702 [2024-11-27 08:13:58.807464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.702 [2024-11-27 08:13:58.807484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.819355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.819375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.834321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.834342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.848897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.848922] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.864360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.864379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.879450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.879470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 16252.75 IOPS, 126.97 MiB/s [2024-11-27T07:13:59.070Z] [2024-11-27 08:13:58.894866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.894885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.910879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.910899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.923596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.923615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.939029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.939048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.954212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.954232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.967279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.967298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.982851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.982870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:58.999413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:58.999438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:59.014619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:59.014639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:59.026180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:59.026200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:59.040886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:59.040906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.961 [2024-11-27 08:13:59.056299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.961 [2024-11-27 08:13:59.056318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 
08:13:59.071897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.071917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.087078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.087097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.098826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.098845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.112537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.112557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.127828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.127849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.143425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.143445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.158904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.158923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.175621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.175641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.191130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.191149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.206383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.206403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.220550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.220570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.236022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.236042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.251539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.251558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.266387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.266406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.277110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.277128] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.292748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.292767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.307642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.307660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.219 [2024-11-27 08:13:59.322951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.219 [2024-11-27 08:13:59.322970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.338831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.338851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.354378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.354397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.368658] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.368677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.384380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.384399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.399348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.399367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.414498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.414517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.428303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.428322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.443579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.443598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.459618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.459638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.474824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.474843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.490689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.490709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.502588] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.502607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.516182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.516201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.476 [2024-11-27 08:13:59.531595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.476 [2024-11-27 08:13:59.531613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.477 [2024-11-27 08:13:59.547089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.477 [2024-11-27 08:13:59.547109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.477 [2024-11-27 08:13:59.558891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.477 [2024-11-27 08:13:59.558910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.477 [2024-11-27 08:13:59.572477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.477 [2024-11-27 08:13:59.572496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.588166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.588186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.603707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.603728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.618494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.618514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.630132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.630153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.644315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.644334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.659427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.659446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.674907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.674930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.690975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.690994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.707059] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.707078] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.722062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.722082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.736510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.736530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.751375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.751395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.766490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.766510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.779047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.779066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.792394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.792413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.808063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.808083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.822925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.822944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.734 [2024-11-27 08:13:59.838227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.734 [2024-11-27 08:13:59.838247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.851381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.851400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.867092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.867111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.882347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.882367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 16229.80 IOPS, 126.80 MiB/s [2024-11-27T07:14:00.102Z] [2024-11-27 08:13:59.894228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.894247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 00:31:05.993 Latency(us) 00:31:05.993 [2024-11-27T07:14:00.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.993 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:05.993 
Nvme1n1 : 5.01 16231.39 126.81 0.00 0.00 7877.35 2222.53 13278.16 00:31:05.993 [2024-11-27T07:14:00.102Z] =================================================================================================================== 00:31:05.993 [2024-11-27T07:14:00.102Z] Total : 16231.39 126.81 0.00 0.00 7877.35 2222.53 13278.16 00:31:05.993 [2024-11-27 08:13:59.902495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.902519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.914492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.914508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.926506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.926522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.938496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.938512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.950495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.950507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.962491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.962503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.974491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.974504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.986489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.986502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:13:59.998491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:13:59.998506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:14:00.010501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:14:00.010521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:14:00.022488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:14:00.022500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:14:00.034507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:14:00.034530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:14:00.046490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:14:00.046502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 [2024-11-27 08:14:00.058487] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.993 [2024-11-27 08:14:00.058497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2664551) - No such process 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2664551 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.993 delay0 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.993 08:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:06.252 [2024-11-27 08:14:00.133982] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:12.811 Initializing NVMe Controllers 00:31:12.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:12.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:12.811 Initialization complete. Launching workers. 
00:31:12.811 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 387 00:31:12.811 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 659, failed to submit 48 00:31:12.811 success 547, unsuccessful 112, failed 0 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.811 rmmod nvme_tcp 00:31:12.811 rmmod nvme_fabrics 00:31:12.811 rmmod nvme_keyring 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2662885 ']' 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2662885 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2662885 ']' 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2662885 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2662885 00:31:12.811 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2662885' 00:31:12.812 killing process with pid 2662885 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2662885 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2662885 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:12.812 08:14:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.812 08:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.715 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.715 00:31:14.715 real 0m30.781s 00:31:14.715 user 0m40.423s 00:31:14.715 sys 0m11.816s 00:31:14.715 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.715 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.715 ************************************ 00:31:14.715 END TEST nvmf_zcopy 00:31:14.715 ************************************ 00:31:14.715 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:14.715 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:14.715 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.715 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.973 ************************************ 00:31:14.973 START TEST nvmf_nmic 00:31:14.973 ************************************ 00:31:14.973 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:14.973 * Looking for test storage... 
00:31:14.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.973 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:14.973 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:14.973 08:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:14.973 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:14.973 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.973 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:14.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.974 --rc genhtml_branch_coverage=1 00:31:14.974 --rc genhtml_function_coverage=1 00:31:14.974 --rc genhtml_legend=1 00:31:14.974 --rc geninfo_all_blocks=1 00:31:14.974 --rc geninfo_unexecuted_blocks=1 00:31:14.974 00:31:14.974 ' 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:14.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.974 --rc genhtml_branch_coverage=1 00:31:14.974 --rc genhtml_function_coverage=1 00:31:14.974 --rc genhtml_legend=1 00:31:14.974 --rc geninfo_all_blocks=1 00:31:14.974 --rc geninfo_unexecuted_blocks=1 00:31:14.974 00:31:14.974 ' 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:14.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.974 --rc genhtml_branch_coverage=1 00:31:14.974 --rc genhtml_function_coverage=1 00:31:14.974 --rc genhtml_legend=1 00:31:14.974 --rc geninfo_all_blocks=1 00:31:14.974 --rc geninfo_unexecuted_blocks=1 00:31:14.974 00:31:14.974 ' 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:14.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.974 --rc genhtml_branch_coverage=1 00:31:14.974 --rc genhtml_function_coverage=1 00:31:14.974 --rc genhtml_legend=1 00:31:14.974 --rc geninfo_all_blocks=1 00:31:14.974 --rc geninfo_unexecuted_blocks=1 00:31:14.974 00:31:14.974 ' 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.974 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.975 08:14:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:14.975 08:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.234 08:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:20.234 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.234 08:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:20.234 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.234 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:20.235 Found net devices under 0000:86:00.0: cvl_0_0 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.235 
08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:20.235 Found net devices under 0000:86:00.1: cvl_0_1 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.235 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:31:20.492 00:31:20.492 --- 10.0.0.2 ping statistics --- 00:31:20.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.492 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:20.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:31:20.492 00:31:20.492 --- 10.0.0.1 ping statistics --- 00:31:20.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.492 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.492 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2669866 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 2669866 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2669866 ']' 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.493 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.750 [2024-11-27 08:14:14.613983] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:20.750 [2024-11-27 08:14:14.614962] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:31:20.750 [2024-11-27 08:14:14.614999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.750 [2024-11-27 08:14:14.683435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.750 [2024-11-27 08:14:14.729518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.750 [2024-11-27 08:14:14.729555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.750 [2024-11-27 08:14:14.729562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.750 [2024-11-27 08:14:14.729569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.750 [2024-11-27 08:14:14.729575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.750 [2024-11-27 08:14:14.730988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.750 [2024-11-27 08:14:14.731088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.750 [2024-11-27 08:14:14.731288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.750 [2024-11-27 08:14:14.731291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.751 [2024-11-27 08:14:14.801103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:20.751 [2024-11-27 08:14:14.801221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:20.751 [2024-11-27 08:14:14.801387] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:20.751 [2024-11-27 08:14:14.801703] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:20.751 [2024-11-27 08:14:14.801884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:20.751 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.751 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:20.751 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:20.751 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.751 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 [2024-11-27 08:14:14.872021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 Malloc0 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.009 
08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 [2024-11-27 08:14:14.943937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:21.009 test case1: single bdev can't be used in multiple subsystems 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 [2024-11-27 08:14:14.975692] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:21.009 [2024-11-27 08:14:14.975712] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:21.009 [2024-11-27 08:14:14.975719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.009 request: 00:31:21.009 { 00:31:21.009 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:21.009 "namespace": { 00:31:21.009 "bdev_name": "Malloc0", 00:31:21.009 "no_auto_visible": false, 00:31:21.009 "hide_metadata": false 00:31:21.009 }, 00:31:21.009 "method": "nvmf_subsystem_add_ns", 00:31:21.009 "req_id": 1 00:31:21.009 } 00:31:21.009 Got JSON-RPC error response 00:31:21.009 response: 00:31:21.009 { 00:31:21.009 "code": -32602, 00:31:21.009 "message": "Invalid parameters" 00:31:21.009 } 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:21.009 08:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:21.009 Adding namespace failed - expected result. 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:21.009 test case2: host connect to nvmf target in multiple paths 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 [2024-11-27 08:14:14.987785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.009 08:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:21.268 08:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:21.525 08:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:21.526 08:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:21.526 08:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:21.526 08:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:21.526 08:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:23.422 08:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:23.422 08:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:23.422 08:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:23.422 08:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:23.422 08:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:23.422 08:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:23.422 08:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:23.422 [global] 00:31:23.422 thread=1 00:31:23.422 invalidate=1 
00:31:23.422 rw=write 00:31:23.422 time_based=1 00:31:23.422 runtime=1 00:31:23.422 ioengine=libaio 00:31:23.422 direct=1 00:31:23.422 bs=4096 00:31:23.422 iodepth=1 00:31:23.422 norandommap=0 00:31:23.422 numjobs=1 00:31:23.422 00:31:23.422 verify_dump=1 00:31:23.422 verify_backlog=512 00:31:23.422 verify_state_save=0 00:31:23.422 do_verify=1 00:31:23.422 verify=crc32c-intel 00:31:23.679 [job0] 00:31:23.679 filename=/dev/nvme0n1 00:31:23.679 Could not set queue depth (nvme0n1) 00:31:23.936 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:23.936 fio-3.35 00:31:23.936 Starting 1 thread 00:31:24.867 00:31:24.867 job0: (groupid=0, jobs=1): err= 0: pid=2670579: Wed Nov 27 08:14:18 2024 00:31:24.867 read: IOPS=886, BW=3544KiB/s (3630kB/s)(3548KiB/1001msec) 00:31:24.867 slat (nsec): min=6354, max=27398, avg=7570.80, stdev=2208.86 00:31:24.867 clat (usec): min=194, max=41966, avg=931.04, stdev=5264.47 00:31:24.867 lat (usec): min=201, max=41989, avg=938.61, stdev=5266.30 00:31:24.867 clat percentiles (usec): 00:31:24.867 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 206], 00:31:24.867 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 262], 60.00th=[ 265], 00:31:24.867 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 269], 95.00th=[ 273], 00:31:24.867 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:24.867 | 99.99th=[42206] 00:31:24.867 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:31:24.867 slat (nsec): min=9042, max=41850, avg=10065.78, stdev=1487.72 00:31:24.867 clat (usec): min=135, max=334, avg=149.78, stdev=22.26 00:31:24.867 lat (usec): min=144, max=376, avg=159.84, stdev=22.54 00:31:24.867 clat percentiles (usec): 00:31:24.867 | 1.00th=[ 139], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:31:24.867 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 145], 60.00th=[ 147], 00:31:24.867 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 178], 00:31:24.867 | 99.00th=[ 245], 99.50th=[ 245], 99.90th=[ 273], 99.95th=[ 334], 00:31:24.867 | 99.99th=[ 334] 00:31:24.867 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:24.867 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:24.867 lat (usec) : 250=73.05%, 500=26.16% 00:31:24.867 lat (msec) : 50=0.78% 00:31:24.867 cpu : usr=1.10%, sys=1.60%, ctx=1911, majf=0, minf=1 00:31:24.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.867 issued rwts: total=887,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:24.867 00:31:24.867 Run status group 0 (all jobs): 00:31:24.867 READ: bw=3544KiB/s (3630kB/s), 3544KiB/s-3544KiB/s (3630kB/s-3630kB/s), io=3548KiB (3633kB), run=1001-1001msec 00:31:24.867 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:31:24.867 00:31:24.867 Disk stats (read/write): 00:31:24.867 nvme0n1: ios=562/893, merge=0/0, ticks=762/133, in_queue=895, util=90.98% 00:31:25.123 08:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:25.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:25.123 08:14:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:25.123 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:25.123 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:25.123 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:25.123 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:25.123 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:25.124 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:25.124 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:25.124 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:25.124 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:25.124 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:25.124 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:25.124 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:25.124 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:25.124 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:25.124 rmmod nvme_tcp 00:31:25.124 rmmod nvme_fabrics 00:31:25.124 rmmod nvme_keyring 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2669866 ']' 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2669866 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2669866 ']' 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2669866 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669866 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2669866' 00:31:25.380 killing process with pid 2669866 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2669866 00:31:25.380 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2669866 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.639 08:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.539 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:27.539 00:31:27.539 real 0m12.722s 00:31:27.539 user 0m24.191s 00:31:27.539 sys 0m5.824s 00:31:27.539 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.539 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.539 ************************************ 00:31:27.539 END TEST nvmf_nmic 00:31:27.539 ************************************ 00:31:27.539 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:27.539 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:27.539 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:27.539 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:27.799 ************************************ 00:31:27.799 START TEST nvmf_fio_target 00:31:27.799 ************************************ 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:27.799 * Looking for test storage... 
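For reference, the nvmf_nmic run that just finished above boils down to the following shell sequence. This is a minimal sketch, not the test script itself: it calls scripts/rpc.py directly instead of the test's rpc_cmd wrapper, folds the status-checking logic into comments, and reuses the hostnqn/hostid and addresses exactly as they appear in the trace; it assumes the target is already up with Malloc0 claimed as a namespace of nqn.2016-06.io.spdk:cnode1.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # case1: a bdev already claimed by cnode1 cannot be added to a second subsystem
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        && echo "unexpected: namespace add should have failed"
    # case2: connect to the same subsystem over two listeners (multipath)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    # run a short verified write workload over the connected namespace, then tear down
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1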
00:31:27.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:27.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.799 --rc genhtml_branch_coverage=1 00:31:27.799 --rc genhtml_function_coverage=1 00:31:27.799 --rc genhtml_legend=1 00:31:27.799 --rc geninfo_all_blocks=1 00:31:27.799 --rc geninfo_unexecuted_blocks=1 00:31:27.799 00:31:27.799 ' 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:27.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.799 --rc genhtml_branch_coverage=1 00:31:27.799 --rc genhtml_function_coverage=1 00:31:27.799 --rc genhtml_legend=1 00:31:27.799 --rc geninfo_all_blocks=1 00:31:27.799 --rc geninfo_unexecuted_blocks=1 00:31:27.799 00:31:27.799 ' 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:27.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.799 --rc genhtml_branch_coverage=1 00:31:27.799 --rc genhtml_function_coverage=1 00:31:27.799 --rc genhtml_legend=1 00:31:27.799 --rc geninfo_all_blocks=1 00:31:27.799 --rc geninfo_unexecuted_blocks=1 00:31:27.799 00:31:27.799 ' 00:31:27.799 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:27.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.799 --rc genhtml_branch_coverage=1 00:31:27.800 --rc genhtml_function_coverage=1 00:31:27.800 --rc genhtml_legend=1 00:31:27.800 --rc geninfo_all_blocks=1 00:31:27.800 --rc geninfo_unexecuted_blocks=1 00:31:27.800 
00:31:27.800 ' 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:27.800 08:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:33.212 08:14:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.212 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:33.213 08:14:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:33.213 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:33.213 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:33.213 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:33.213 Found net devices under 0000:86:00.1: cvl_0_1 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:33.213 08:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:33.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:33.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:31:33.213 00:31:33.213 --- 10.0.0.2 ping statistics --- 00:31:33.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.213 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:33.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:33.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:31:33.213 00:31:33.213 --- 10.0.0.1 ping statistics --- 00:31:33.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.213 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:33.213 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:33.214 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.214 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2674236 00:31:33.214 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:33.214 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2674236 00:31:33.214 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2674236 ']' 00:31:33.214 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.214 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:33.214 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:33.214 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:33.214 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.214 [2024-11-27 08:14:27.130831] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:33.214 [2024-11-27 08:14:27.131726] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:31:33.214 [2024-11-27 08:14:27.131759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.214 [2024-11-27 08:14:27.200310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:33.214 [2024-11-27 08:14:27.242591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.214 [2024-11-27 08:14:27.242629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.214 [2024-11-27 08:14:27.242636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.214 [2024-11-27 08:14:27.242643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.214 [2024-11-27 08:14:27.242648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:33.214 [2024-11-27 08:14:27.244185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.214 [2024-11-27 08:14:27.244205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.214 [2024-11-27 08:14:27.244233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:33.214 [2024-11-27 08:14:27.244235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.214 [2024-11-27 08:14:27.312315] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:33.214 [2024-11-27 08:14:27.312442] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:33.214 [2024-11-27 08:14:27.312639] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:33.214 [2024-11-27 08:14:27.312902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:33.214 [2024-11-27 08:14:27.313086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
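The startup traced above amounts to splitting the two e810 ports into a host side and a target namespace and then launching nvmf_tgt in interrupt mode inside that namespace. A condensed sketch of those commands, using the interface names, paths, and flags exactly as they appear in this run, is:

    # move the target port cvl_0_0 into its own namespace; cvl_0_1 stays on the host side
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # start the target with all tracepoint groups and interrupt mode, core mask 0xF (cores 0-3)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF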
00:31:34.151 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:34.151 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:34.151 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:34.151 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:34.151 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:34.151 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.152 08:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:34.152 [2024-11-27 08:14:28.165005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.152 08:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:34.410 08:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:34.410 08:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:34.669 08:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:34.669 08:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:34.928 08:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:34.928 08:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:35.186 08:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:35.186 08:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:35.186 08:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:35.444 08:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:35.444 08:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:35.702 08:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:35.702 08:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:35.961 08:14:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:35.961 08:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:36.219 08:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:36.219 08:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:36.219 08:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:36.477 08:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:36.477 08:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:36.736 08:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.736 [2024-11-27 08:14:30.836972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.994 08:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:36.994 08:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:37.252 08:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:37.510 08:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:37.511 08:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:37.511 08:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:37.511 08:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:37.511 08:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:37.511 08:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:40.041 08:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:40.041 08:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:31:40.041 08:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:40.041 08:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:40.041 08:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:40.041 08:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:31:40.041 08:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:40.041 [global] 00:31:40.041 thread=1 00:31:40.041 invalidate=1 00:31:40.041 rw=write 00:31:40.041 time_based=1 00:31:40.041 runtime=1 00:31:40.041 ioengine=libaio 00:31:40.041 direct=1 00:31:40.041 bs=4096 00:31:40.041 iodepth=1 00:31:40.041 norandommap=0 00:31:40.041 numjobs=1 00:31:40.041 00:31:40.041 verify_dump=1 00:31:40.041 verify_backlog=512 00:31:40.041 verify_state_save=0 00:31:40.041 do_verify=1 00:31:40.041 verify=crc32c-intel 00:31:40.041 [job0] 00:31:40.041 filename=/dev/nvme0n1 00:31:40.041 [job1] 00:31:40.041 filename=/dev/nvme0n2 00:31:40.041 [job2] 00:31:40.041 filename=/dev/nvme0n3 00:31:40.041 [job3] 00:31:40.041 filename=/dev/nvme0n4 00:31:40.041 Could not set queue depth (nvme0n1) 00:31:40.041 Could not set queue depth (nvme0n2) 00:31:40.041 Could not set queue depth (nvme0n3) 00:31:40.041 Could not set queue depth (nvme0n4) 00:31:40.041 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.041 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.041 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.041 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.041 fio-3.35 00:31:40.041 Starting 4 threads 00:31:41.416 00:31:41.416 job0: (groupid=0, jobs=1): err= 0: pid=2675559: Wed Nov 27 08:14:35 2024 00:31:41.416 read: IOPS=1262, BW=5052KiB/s (5173kB/s)(5168KiB/1023msec) 00:31:41.416 slat (nsec): min=8344, max=26548, avg=9349.58, stdev=1633.75 00:31:41.416 clat (usec): min=200, max=42016, avg=551.78, stdev=3581.07 00:31:41.416 lat (usec): min=209, max=42038, avg=561.13, stdev=3581.99 00:31:41.416 clat percentiles (usec): 00:31:41.416 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 225], 00:31:41.416 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 233], 00:31:41.416 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 269], 00:31:41.416 | 99.00th=[ 594], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:31:41.416 | 99.99th=[42206] 00:31:41.416 write: IOPS=1501, BW=6006KiB/s (6150kB/s)(6144KiB/1023msec); 0 zone resets 00:31:41.416 slat (nsec): min=10533, max=46369, avg=12633.82, stdev=2132.06 00:31:41.416 clat (usec): min=143, max=1520, avg=175.25, stdev=39.70 00:31:41.416 lat (usec): min=155, max=1538, avg=187.89, stdev=40.00 00:31:41.416 clat percentiles (usec): 00:31:41.416 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:31:41.416 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:31:41.416 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 202], 95.00th=[ 215], 00:31:41.416 | 99.00th=[ 
245], 99.50th=[ 255], 99.90th=[ 351], 99.95th=[ 1516], 00:31:41.416 | 99.99th=[ 1516] 00:31:41.416 bw ( KiB/s): min= 4896, max= 7392, per=33.02%, avg=6144.00, stdev=1764.94, samples=2 00:31:41.416 iops : min= 1224, max= 1848, avg=1536.00, stdev=441.23, samples=2 00:31:41.416 lat (usec) : 250=96.36%, 500=3.08%, 750=0.14%, 1000=0.04% 00:31:41.416 lat (msec) : 2=0.04%, 50=0.35% 00:31:41.416 cpu : usr=2.74%, sys=4.50%, ctx=2828, majf=0, minf=2 00:31:41.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.417 issued rwts: total=1292,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:41.417 job1: (groupid=0, jobs=1): err= 0: pid=2675577: Wed Nov 27 08:14:35 2024 00:31:41.417 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:41.417 slat (nsec): min=7264, max=37331, avg=8353.87, stdev=1477.86 00:31:41.417 clat (usec): min=202, max=469, avg=262.18, stdev=32.00 00:31:41.417 lat (usec): min=210, max=477, avg=270.54, stdev=32.05 00:31:41.417 clat percentiles (usec): 00:31:41.417 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:31:41.417 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:31:41.417 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 314], 00:31:41.417 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 465], 99.95th=[ 469], 00:31:41.417 | 99.99th=[ 469] 00:31:41.417 write: IOPS=2242, BW=8971KiB/s (9186kB/s)(8980KiB/1001msec); 0 zone resets 00:31:41.417 slat (usec): min=10, max=14793, avg=18.55, stdev=311.98 00:31:41.417 clat (usec): min=130, max=377, avg=174.21, stdev=21.17 00:31:41.417 lat (usec): min=141, max=15071, avg=192.76, stdev=314.87 00:31:41.417 clat percentiles (usec): 00:31:41.417 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:31:41.417 | 30.00th=[ 161], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:31:41.417 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 202], 95.00th=[ 212], 00:31:41.417 | 99.00th=[ 241], 99.50th=[ 255], 99.90th=[ 277], 99.95th=[ 277], 00:31:41.417 | 99.99th=[ 379] 00:31:41.417 bw ( KiB/s): min= 8744, max= 8744, per=47.00%, avg=8744.00, stdev= 0.00, samples=1 00:31:41.417 iops : min= 2186, max= 2186, avg=2186.00, stdev= 0.00, samples=1 00:31:41.417 lat (usec) : 250=71.47%, 500=28.53% 00:31:41.417 cpu : usr=4.10%, sys=6.50%, ctx=4295, majf=0, minf=1 00:31:41.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.417 issued rwts: total=2048,2245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:41.417 job2: (groupid=0, jobs=1): err= 0: pid=2675583: Wed Nov 27 08:14:35 2024 00:31:41.417 read: IOPS=20, BW=81.3KiB/s (83.3kB/s)(84.0KiB/1033msec) 00:31:41.417 slat (nsec): min=9778, max=26110, avg=24879.67, stdev=3469.12 00:31:41.417 clat (usec): min=40870, max=41234, avg=40981.62, stdev=72.94 00:31:41.417 lat (usec): min=40896, max=41243, avg=41006.50, stdev=70.22 00:31:41.417 clat percentiles (usec): 00:31:41.417 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:41.417 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:31:41.417 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:41.417 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:41.417 | 99.99th=[41157] 00:31:41.417 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:31:41.417 slat (usec): min=11, max=40616, avg=120.92, stdev=1908.96 00:31:41.417 clat (usec): min=162, max=393, avg=209.92, stdev=25.86 00:31:41.417 lat (usec): min=174, max=40931, avg=330.84, stdev=1916.26 00:31:41.417 clat percentiles (usec): 00:31:41.417 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:31:41.417 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:31:41.417 | 70.00th=[ 219], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:31:41.417 | 99.00th=[ 265], 99.50th=[ 314], 99.90th=[ 396], 99.95th=[ 396], 00:31:41.417 | 99.99th=[ 396] 00:31:41.417 bw ( KiB/s): min= 4096, max= 4096, per=22.01%, avg=4096.00, stdev= 0.00, samples=1 00:31:41.417 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:41.417 lat (usec) : 250=90.62%, 500=5.44% 00:31:41.417 lat (msec) : 50=3.94% 00:31:41.417 cpu : usr=0.58%, sys=0.78%, ctx=538, majf=0, minf=1 00:31:41.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.417 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:41.417 job3: (groupid=0, jobs=1): err= 0: pid=2675585: Wed Nov 27 08:14:35 2024 00:31:41.417 read: IOPS=23, BW=95.8KiB/s (98.1kB/s)(96.0KiB/1002msec) 00:31:41.417 slat (nsec): min=10387, max=37269, avg=15129.08, stdev=6442.19 00:31:41.417 clat (usec): min=323, max=41174, avg=36799.77, stdev=11879.05 00:31:41.417 lat (usec): min=333, max=41199, avg=36814.90, stdev=11879.92 00:31:41.417 clat percentiles (usec): 00:31:41.417 | 1.00th=[ 326], 5.00th=[ 351], 10.00th=[21890], 20.00th=[40633], 00:31:41.417 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:41.417 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:41.417 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:41.417 | 99.99th=[41157] 00:31:41.417 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:31:41.417 slat (nsec): min=10558, max=51741, avg=14609.52, stdev=6332.06 00:31:41.417 clat (usec): min=145, max=373, avg=211.51, stdev=37.42 00:31:41.417 lat (usec): min=157, max=385, avg=226.12, stdev=39.16 00:31:41.417 clat percentiles (usec): 00:31:41.417 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 180], 00:31:41.417 | 30.00th=[ 192], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 217], 00:31:41.417 | 70.00th=[ 227], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 273], 00:31:41.417 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 375], 99.95th=[ 375], 00:31:41.417 | 99.99th=[ 375] 00:31:41.417 bw ( KiB/s): min= 4096, max= 4096, per=22.01%, avg=4096.00, stdev= 0.00, samples=1 00:31:41.417 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:41.417 lat (usec) : 250=79.66%, 500=16.23% 00:31:41.417 lat (msec) : 50=4.10% 00:31:41.417 cpu : usr=0.40%, sys=1.10%, ctx=536, majf=0, minf=2 00:31:41.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:41.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.417 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:41.417 00:31:41.417 Run status group 0 (all jobs): 00:31:41.417 READ: bw=12.8MiB/s (13.4MB/s), 81.3KiB/s-8184KiB/s (83.3kB/s-8380kB/s), io=13.2MiB (13.9MB), run=1001-1033msec 00:31:41.417 WRITE: bw=18.2MiB/s (19.1MB/s), 1983KiB/s-8971KiB/s (2030kB/s-9186kB/s), io=18.8MiB (19.7MB), run=1001-1033msec 00:31:41.417 00:31:41.417 Disk stats (read/write): 00:31:41.417 nvme0n1: ios=1107/1536, merge=0/0, ticks=550/250, in_queue=800, util=85.86% 00:31:41.417 nvme0n2: ios=1559/1909, merge=0/0, ticks=1297/316, in_queue=1613, util=91.44% 00:31:41.417 nvme0n3: ios=75/512, merge=0/0, ticks=974/98, in_queue=1072, util=95.43% 00:31:41.417 nvme0n4: ios=75/512, merge=0/0, ticks=728/96, in_queue=824, util=93.68% 00:31:41.417 08:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:41.417 [global] 00:31:41.417 thread=1 00:31:41.417 invalidate=1 00:31:41.417 rw=randwrite 00:31:41.417 time_based=1 00:31:41.417 runtime=1 00:31:41.417 ioengine=libaio 00:31:41.417 direct=1 00:31:41.417 bs=4096 00:31:41.417 iodepth=1 00:31:41.417 norandommap=0 00:31:41.417 numjobs=1 00:31:41.417 00:31:41.417 verify_dump=1 00:31:41.417 verify_backlog=512 00:31:41.417 verify_state_save=0 00:31:41.417 do_verify=1 00:31:41.417 verify=crc32c-intel 00:31:41.417 [job0] 00:31:41.417 filename=/dev/nvme0n1 00:31:41.417 [job1] 00:31:41.417 filename=/dev/nvme0n2 00:31:41.417 [job2] 00:31:41.417 filename=/dev/nvme0n3 00:31:41.417 [job3] 00:31:41.417 filename=/dev/nvme0n4 00:31:41.417 Could not set queue depth (nvme0n1) 00:31:41.417 Could not set queue depth (nvme0n2) 00:31:41.417 Could not set queue depth (nvme0n3) 00:31:41.417 Could not set queue depth (nvme0n4) 00:31:41.676 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:41.676 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:41.676 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:41.676 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:41.676 fio-3.35 00:31:41.676 Starting 4 threads 00:31:43.050 00:31:43.050 job0: (groupid=0, jobs=1): err= 0: pid=2675953: Wed Nov 27 08:14:36 2024 00:31:43.050 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:31:43.050 slat (nsec): min=6388, max=31053, avg=7256.69, stdev=945.05 00:31:43.050 clat (usec): min=173, max=260, avg=204.58, stdev= 9.95 00:31:43.050 lat (usec): min=182, max=270, avg=211.84, stdev=10.05 00:31:43.050 clat percentiles (usec): 00:31:43.050 | 1.00th=[ 194], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 198], 00:31:43.050 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 202], 60.00th=[ 204], 00:31:43.050 | 70.00th=[ 206], 80.00th=[ 208], 90.00th=[ 212], 95.00th=[ 229], 00:31:43.050 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 255], 00:31:43.050 | 99.99th=[ 262] 00:31:43.050 write: IOPS=2677, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:31:43.050 slat (nsec): min=4801, max=37365, avg=9797.09, stdev=1322.50 00:31:43.050 clat (usec): min=129, max=320, avg=157.01, stdev=24.05 
00:31:43.050 lat (usec): min=141, max=327, avg=166.81, stdev=23.88 00:31:43.051 clat percentiles (usec): 00:31:43.051 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 139], 00:31:43.051 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:31:43.051 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 192], 00:31:43.051 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 262], 99.95th=[ 310], 00:31:43.051 | 99.99th=[ 322] 00:31:43.051 bw ( KiB/s): min=12288, max=12288, per=75.71%, avg=12288.00, stdev= 0.00, samples=1 00:31:43.051 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:43.051 lat (usec) : 250=99.35%, 500=0.65% 00:31:43.051 cpu : usr=2.40%, sys=4.70%, ctx=5240, majf=0, minf=1 00:31:43.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.051 issued rwts: total=2560,2680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.051 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.051 job1: (groupid=0, jobs=1): err= 0: pid=2675954: Wed Nov 27 08:14:36 2024 00:31:43.051 read: IOPS=26, BW=108KiB/s (110kB/s)(112KiB/1039msec) 00:31:43.051 slat (nsec): min=8632, max=28851, avg=20249.32, stdev=6490.26 00:31:43.051 clat (usec): min=251, max=42380, avg=32297.47, stdev=17025.35 00:31:43.051 lat (usec): min=259, max=42397, avg=32317.72, stdev=17027.17 00:31:43.051 clat percentiles (usec): 00:31:43.051 | 1.00th=[ 251], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 400], 00:31:43.051 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:43.051 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:43.051 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:43.051 | 99.99th=[42206] 00:31:43.051 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:31:43.051 slat (nsec): min=10859, max=45034, avg=15081.34, stdev=4161.49 00:31:43.051 clat (usec): min=141, max=489, avg=236.03, stdev=58.13 00:31:43.051 lat (usec): min=153, max=503, avg=251.11, stdev=59.30 00:31:43.051 clat percentiles (usec): 00:31:43.051 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 182], 00:31:43.051 | 30.00th=[ 200], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 235], 00:31:43.051 | 70.00th=[ 247], 80.00th=[ 281], 90.00th=[ 322], 95.00th=[ 351], 00:31:43.051 | 99.00th=[ 404], 99.50th=[ 429], 99.90th=[ 490], 99.95th=[ 490], 00:31:43.051 | 99.99th=[ 490] 00:31:43.051 bw ( KiB/s): min= 4096, max= 4096, per=25.24%, avg=4096.00, stdev= 0.00, samples=1 00:31:43.051 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:43.051 lat (usec) : 250=67.96%, 500=27.96% 00:31:43.051 lat (msec) : 50=4.07% 00:31:43.051 cpu : usr=0.77%, sys=0.77%, ctx=542, majf=0, minf=1 00:31:43.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.051 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.051 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.051 job2: (groupid=0, jobs=1): err= 0: pid=2675955: Wed Nov 27 08:14:36 2024 00:31:43.051 read: IOPS=22, BW=89.7KiB/s (91.8kB/s)(92.0KiB/1026msec) 00:31:43.051 slat (nsec): min=9823, max=24370, avg=22224.52, stdev=4031.23 
00:31:43.051 clat (usec): min=305, max=41085, avg=39179.22, stdev=8474.67 00:31:43.051 lat (usec): min=329, max=41109, avg=39201.44, stdev=8474.28 00:31:43.051 clat percentiles (usec): 00:31:43.051 | 1.00th=[ 306], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:43.051 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:43.051 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:43.051 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:43.051 | 99.99th=[41157] 00:31:43.051 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:31:43.051 slat (nsec): min=9949, max=37046, avg=11255.41, stdev=2159.91 00:31:43.051 clat (usec): min=146, max=887, avg=222.88, stdev=55.99 00:31:43.051 lat (usec): min=157, max=899, avg=234.14, stdev=56.12 00:31:43.051 clat percentiles (usec): 00:31:43.051 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 190], 00:31:43.051 | 30.00th=[ 204], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:31:43.051 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 277], 00:31:43.051 | 99.00th=[ 326], 99.50th=[ 701], 99.90th=[ 889], 99.95th=[ 889], 00:31:43.051 | 99.99th=[ 889] 00:31:43.051 bw ( KiB/s): min= 4096, max= 4096, per=25.24%, avg=4096.00, stdev= 0.00, samples=1 00:31:43.051 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:43.051 lat (usec) : 250=82.62%, 500=12.52%, 750=0.56%, 1000=0.19% 00:31:43.051 lat (msec) : 50=4.11% 00:31:43.051 cpu : usr=0.20%, sys=0.68%, ctx=536, majf=0, minf=1 00:31:43.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.051 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.051 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.051 job3: (groupid=0, jobs=1): err= 0: pid=2675956: Wed Nov 27 08:14:36 2024 00:31:43.051 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec) 00:31:43.051 slat (nsec): min=11085, max=26683, avg=24120.45, stdev=3053.39 00:31:43.051 clat (usec): min=40669, max=41326, avg=40961.86, stdev=112.60 00:31:43.051 lat (usec): min=40695, max=41348, avg=40985.98, stdev=112.58 00:31:43.051 clat percentiles (usec): 00:31:43.051 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:43.051 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:43.051 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:43.051 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:43.051 | 99.99th=[41157] 00:31:43.051 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:31:43.051 slat (nsec): min=10782, max=34401, avg=13085.20, stdev=2310.45 00:31:43.051 clat (usec): min=151, max=860, avg=236.95, stdev=57.40 00:31:43.051 lat (usec): min=163, max=875, avg=250.04, stdev=57.59 00:31:43.051 clat percentiles (usec): 00:31:43.051 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 188], 20.00th=[ 208], 00:31:43.051 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:31:43.051 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 285], 95.00th=[ 306], 00:31:43.051 | 99.00th=[ 355], 99.50th=[ 709], 99.90th=[ 865], 99.95th=[ 865], 00:31:43.051 | 99.99th=[ 865] 00:31:43.051 bw ( KiB/s): min= 4096, max= 4096, per=25.24%, avg=4096.00, stdev= 0.00, samples=1 00:31:43.051 iops : min= 
1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:43.051 lat (usec) : 250=73.97%, 500=21.16%, 750=0.56%, 1000=0.19% 00:31:43.051 lat (msec) : 50=4.12% 00:31:43.051 cpu : usr=0.48%, sys=0.97%, ctx=536, majf=0, minf=1 00:31:43.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.051 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.051 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.051 00:31:43.051 Run status group 0 (all jobs): 00:31:43.051 READ: bw=9.90MiB/s (10.4MB/s), 85.1KiB/s-9.99MiB/s (87.1kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1039msec 00:31:43.051 WRITE: bw=15.8MiB/s (16.6MB/s), 1971KiB/s-10.5MiB/s (2018kB/s-11.0MB/s), io=16.5MiB (17.3MB), run=1001-1039msec 00:31:43.051 00:31:43.051 Disk stats (read/write): 00:31:43.051 nvme0n1: ios=2097/2444, merge=0/0, ticks=434/375, in_queue=809, util=86.77% 00:31:43.051 nvme0n2: ios=46/512, merge=0/0, ticks=1600/106, in_queue=1706, util=89.54% 00:31:43.051 nvme0n3: ios=75/512, merge=0/0, ticks=764/113, in_queue=877, util=94.80% 00:31:43.051 nvme0n4: ios=74/512, merge=0/0, ticks=827/113, in_queue=940, util=94.03% 00:31:43.051 08:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:43.051 [global] 00:31:43.051 thread=1 00:31:43.051 invalidate=1 00:31:43.051 rw=write 00:31:43.051 time_based=1 00:31:43.051 runtime=1 00:31:43.051 ioengine=libaio 00:31:43.051 direct=1 00:31:43.051 bs=4096 00:31:43.051 iodepth=128 00:31:43.051 norandommap=0 00:31:43.051 numjobs=1 00:31:43.051 00:31:43.051 verify_dump=1 00:31:43.051 verify_backlog=512 00:31:43.051 verify_state_save=0 00:31:43.051 do_verify=1 00:31:43.051 verify=crc32c-intel 00:31:43.051 [job0] 00:31:43.051 filename=/dev/nvme0n1 00:31:43.051 [job1] 00:31:43.051 filename=/dev/nvme0n2 00:31:43.051 [job2] 00:31:43.051 filename=/dev/nvme0n3 00:31:43.051 [job3] 00:31:43.051 filename=/dev/nvme0n4 00:31:43.051 Could not set queue depth (nvme0n1) 00:31:43.051 Could not set queue depth (nvme0n2) 00:31:43.051 Could not set queue depth (nvme0n3) 00:31:43.051 Could not set queue depth (nvme0n4) 00:31:43.051 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:43.051 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:43.051 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:43.051 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:43.051 fio-3.35 00:31:43.051 Starting 4 threads 00:31:44.426 00:31:44.426 job0: (groupid=0, jobs=1): err= 0: pid=2676326: Wed Nov 27 08:14:38 2024 00:31:44.426 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:31:44.426 slat (nsec): min=1006, max=15898k, avg=99057.70, stdev=777387.92 00:31:44.426 clat (usec): min=3793, max=37255, avg=12943.46, stdev=5502.53 00:31:44.426 lat (usec): min=3800, max=37262, avg=13042.51, stdev=5553.07 00:31:44.426 clat percentiles (usec): 00:31:44.426 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 7504], 20.00th=[ 9372], 00:31:44.426 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[11076], 60.00th=[12518], 
00:31:44.426 | 70.00th=[15139], 80.00th=[16581], 90.00th=[20579], 95.00th=[22676], 00:31:44.426 | 99.00th=[33162], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:31:44.426 | 99.99th=[37487] 00:31:44.426 write: IOPS=4997, BW=19.5MiB/s (20.5MB/s)(19.8MiB/1013msec); 0 zone resets 00:31:44.426 slat (nsec): min=1812, max=14603k, avg=97241.77, stdev=663667.28 00:31:44.426 clat (usec): min=347, max=41945, avg=13607.73, stdev=7226.86 00:31:44.426 lat (usec): min=896, max=41956, avg=13704.98, stdev=7284.92 00:31:44.426 clat percentiles (usec): 00:31:44.426 | 1.00th=[ 2278], 5.00th=[ 5997], 10.00th=[ 7570], 20.00th=[ 9503], 00:31:44.426 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10421], 60.00th=[11731], 00:31:44.426 | 70.00th=[13960], 80.00th=[18744], 90.00th=[26084], 95.00th=[29492], 00:31:44.426 | 99.00th=[37487], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:31:44.426 | 99.99th=[42206] 00:31:44.426 bw ( KiB/s): min=14904, max=24576, per=25.50%, avg=19740.00, stdev=6839.14, samples=2 00:31:44.426 iops : min= 3726, max= 6144, avg=4935.00, stdev=1709.78, samples=2 00:31:44.426 lat (usec) : 500=0.01% 00:31:44.426 lat (msec) : 2=0.44%, 4=0.57%, 10=33.71%, 20=50.38%, 50=14.88% 00:31:44.426 cpu : usr=3.06%, sys=4.94%, ctx=471, majf=0, minf=2 00:31:44.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:44.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:44.426 issued rwts: total=4608,5062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:44.426 job1: (groupid=0, jobs=1): err= 0: pid=2676327: Wed Nov 27 08:14:38 2024 00:31:44.426 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:31:44.426 slat (nsec): min=1197, max=22349k, avg=69929.42, stdev=733335.13 00:31:44.426 clat (usec): min=461, max=48932, avg=12898.02, stdev=6877.07 00:31:44.426 lat (usec): min=470, max=48957, avg=12967.95, stdev=6930.71 00:31:44.426 clat percentiles (usec): 00:31:44.426 | 1.00th=[ 1516], 5.00th=[ 4621], 10.00th=[ 5342], 20.00th=[ 8455], 00:31:44.426 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[12649], 00:31:44.426 | 70.00th=[15270], 80.00th=[17695], 90.00th=[22414], 95.00th=[26608], 00:31:44.426 | 99.00th=[34341], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:31:44.426 | 99.99th=[49021] 00:31:44.426 write: IOPS=5583, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1006msec); 0 zone resets 00:31:44.426 slat (usec): min=2, max=17714, avg=72.85, stdev=765.63 00:31:44.426 clat (usec): min=500, max=119290, avg=12071.80, stdev=11740.25 00:31:44.426 lat (usec): min=511, max=119296, avg=12144.65, stdev=11766.46 00:31:44.426 clat percentiles (usec): 00:31:44.426 | 1.00th=[ 881], 5.00th=[ 1827], 10.00th=[ 3130], 20.00th=[ 5997], 00:31:44.426 | 30.00th=[ 7635], 40.00th=[ 9765], 50.00th=[ 10945], 60.00th=[ 11994], 00:31:44.426 | 70.00th=[ 13173], 80.00th=[ 15795], 90.00th=[ 19006], 95.00th=[ 20317], 00:31:44.426 | 99.00th=[ 80217], 99.50th=[105382], 99.90th=[112722], 99.95th=[119014], 00:31:44.426 | 99.99th=[119014] 00:31:44.426 bw ( KiB/s): min=18784, max=25128, per=28.36%, avg=21956.00, stdev=4485.89, samples=2 00:31:44.426 iops : min= 4696, max= 6282, avg=5489.00, stdev=1121.47, samples=2 00:31:44.426 lat (usec) : 500=0.03%, 750=0.15%, 1000=1.43% 00:31:44.426 lat (msec) : 2=2.30%, 4=4.75%, 10=34.90%, 20=47.00%, 50=8.53% 00:31:44.426 lat (msec) : 100=0.55%, 250=0.36% 00:31:44.426 cpu : 
usr=4.38%, sys=5.57%, ctx=303, majf=0, minf=1 00:31:44.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:44.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:44.426 issued rwts: total=4608,5617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:44.426 job2: (groupid=0, jobs=1): err= 0: pid=2676328: Wed Nov 27 08:14:38 2024 00:31:44.426 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:31:44.426 slat (nsec): min=1147, max=21752k, avg=153768.74, stdev=1230459.06 00:31:44.426 clat (usec): min=7139, max=59867, avg=18637.81, stdev=10108.91 00:31:44.426 lat (usec): min=7147, max=59893, avg=18791.58, stdev=10205.77 00:31:44.426 clat percentiles (usec): 00:31:44.426 | 1.00th=[ 7635], 5.00th=[ 9372], 10.00th=[10814], 20.00th=[11469], 00:31:44.426 | 30.00th=[11863], 40.00th=[12780], 50.00th=[14353], 60.00th=[17171], 00:31:44.426 | 70.00th=[19268], 80.00th=[25822], 90.00th=[35390], 95.00th=[39584], 00:31:44.426 | 99.00th=[55837], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:31:44.426 | 99.99th=[60031] 00:31:44.426 write: IOPS=3856, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1010msec); 0 zone resets 00:31:44.426 slat (nsec): min=1947, max=25048k, avg=108973.75, stdev=769127.48 00:31:44.426 clat (usec): min=6319, max=57174, avg=15568.25, stdev=6659.47 00:31:44.426 lat (usec): min=6334, max=57183, avg=15677.22, stdev=6686.75 00:31:44.426 clat percentiles (usec): 00:31:44.426 | 1.00th=[ 7046], 5.00th=[ 8848], 10.00th=[10814], 20.00th=[11469], 00:31:44.426 | 30.00th=[11600], 40.00th=[12649], 50.00th=[13566], 60.00th=[14222], 00:31:44.426 | 70.00th=[16188], 80.00th=[18482], 90.00th=[22676], 95.00th=[30540], 00:31:44.426 | 99.00th=[39060], 99.50th=[39060], 99.90th=[53216], 99.95th=[53216], 00:31:44.426 | 99.99th=[57410] 00:31:44.426 bw ( KiB/s): min=13760, max=16384, per=19.47%, avg=15072.00, stdev=1855.45, samples=2 00:31:44.426 iops : min= 3440, max= 4096, avg=3768.00, stdev=463.86, samples=2 00:31:44.426 lat (msec) : 10=7.11%, 20=69.69%, 50=22.58%, 100=0.62% 00:31:44.426 cpu : usr=1.78%, sys=4.46%, ctx=380, majf=0, minf=1 00:31:44.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:44.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:44.426 issued rwts: total=3584,3895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:44.426 job3: (groupid=0, jobs=1): err= 0: pid=2676329: Wed Nov 27 08:14:38 2024 00:31:44.426 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:31:44.426 slat (nsec): min=1414, max=15359k, avg=95078.00, stdev=825830.06 00:31:44.426 clat (usec): min=971, max=78955, avg=14777.55, stdev=10521.97 00:31:44.426 lat (usec): min=979, max=78971, avg=14872.63, stdev=10570.94 00:31:44.426 clat percentiles (usec): 00:31:44.426 | 1.00th=[ 1680], 5.00th=[ 3884], 10.00th=[ 7898], 20.00th=[ 9634], 00:31:44.426 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12125], 60.00th=[12911], 00:31:44.426 | 70.00th=[16057], 80.00th=[19006], 90.00th=[22938], 95.00th=[28181], 00:31:44.426 | 99.00th=[76022], 99.50th=[77071], 99.90th=[78119], 99.95th=[79168], 00:31:44.426 | 99.99th=[79168] 00:31:44.426 write: IOPS=4974, BW=19.4MiB/s (20.4MB/s)(19.6MiB/1011msec); 0 zone resets 
00:31:44.426 slat (usec): min=2, max=14680, avg=86.09, stdev=682.86 00:31:44.426 clat (usec): min=1332, max=63615, avg=11930.63, stdev=4655.72 00:31:44.426 lat (usec): min=1353, max=63618, avg=12016.72, stdev=4699.88 00:31:44.426 clat percentiles (usec): 00:31:44.426 | 1.00th=[ 3359], 5.00th=[ 5866], 10.00th=[ 6652], 20.00th=[ 8225], 00:31:44.426 | 30.00th=[10421], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:31:44.426 | 70.00th=[12780], 80.00th=[14615], 90.00th=[16712], 95.00th=[17695], 00:31:44.426 | 99.00th=[27395], 99.50th=[30016], 99.90th=[49546], 99.95th=[49546], 00:31:44.426 | 99.99th=[63701] 00:31:44.426 bw ( KiB/s): min=16888, max=22328, per=25.33%, avg=19608.00, stdev=3846.66, samples=2 00:31:44.426 iops : min= 4222, max= 5582, avg=4902.00, stdev=961.67, samples=2 00:31:44.427 lat (usec) : 1000=0.03% 00:31:44.427 lat (msec) : 2=0.83%, 4=2.93%, 10=21.03%, 20=66.34%, 50=7.87% 00:31:44.427 lat (msec) : 100=0.98% 00:31:44.427 cpu : usr=4.06%, sys=7.82%, ctx=374, majf=0, minf=1 00:31:44.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:44.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:44.427 issued rwts: total=4608,5029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:44.427 00:31:44.427 Run status group 0 (all jobs): 00:31:44.427 READ: bw=67.1MiB/s (70.4MB/s), 13.9MiB/s-17.9MiB/s (14.5MB/s-18.8MB/s), io=68.0MiB (71.3MB), run=1006-1013msec 00:31:44.427 WRITE: bw=75.6MiB/s (79.3MB/s), 15.1MiB/s-21.8MiB/s (15.8MB/s-22.9MB/s), io=76.6MiB (80.3MB), run=1006-1013msec 00:31:44.427 00:31:44.427 Disk stats (read/write): 00:31:44.427 nvme0n1: ios=4146/4590, merge=0/0, ticks=40397/39309, in_queue=79706, util=87.07% 00:31:44.427 nvme0n2: ios=4124/4389, merge=0/0, ticks=52410/45758, in_queue=98168, util=89.66% 00:31:44.427 nvme0n3: ios=3129/3525, merge=0/0, ticks=27448/25343, in_queue=52791, util=92.52% 00:31:44.427 nvme0n4: ios=3641/4096, merge=0/0, ticks=54925/48336, in_queue=103261, util=95.40% 00:31:44.427 08:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:44.427 [global] 00:31:44.427 thread=1 00:31:44.427 invalidate=1 00:31:44.427 rw=randwrite 00:31:44.427 time_based=1 00:31:44.427 runtime=1 00:31:44.427 ioengine=libaio 00:31:44.427 direct=1 00:31:44.427 bs=4096 00:31:44.427 iodepth=128 00:31:44.427 norandommap=0 00:31:44.427 numjobs=1 00:31:44.427 00:31:44.427 verify_dump=1 00:31:44.427 verify_backlog=512 00:31:44.427 verify_state_save=0 00:31:44.427 do_verify=1 00:31:44.427 verify=crc32c-intel 00:31:44.427 [job0] 00:31:44.427 filename=/dev/nvme0n1 00:31:44.427 [job1] 00:31:44.427 filename=/dev/nvme0n2 00:31:44.427 [job2] 00:31:44.427 filename=/dev/nvme0n3 00:31:44.427 [job3] 00:31:44.427 filename=/dev/nvme0n4 00:31:44.427 Could not set queue depth (nvme0n1) 00:31:44.427 Could not set queue depth (nvme0n2) 00:31:44.427 Could not set queue depth (nvme0n3) 00:31:44.427 Could not set queue depth (nvme0n4) 00:31:44.699 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:44.699 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:44.699 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:44.699 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:44.699 fio-3.35 00:31:44.699 Starting 4 threads 00:31:46.089 00:31:46.089 job0: (groupid=0, jobs=1): err= 0: pid=2676697: Wed Nov 27 08:14:39 2024 00:31:46.089 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:31:46.089 slat (nsec): min=1080, max=12108k, avg=94109.54, stdev=751266.02 00:31:46.089 clat (usec): min=3716, max=43329, avg=13370.58, stdev=4908.28 00:31:46.089 lat (usec): min=3722, max=43334, avg=13464.69, stdev=4964.91 00:31:46.089 clat percentiles (usec): 00:31:46.089 | 1.00th=[ 4424], 5.00th=[ 7439], 10.00th=[ 8717], 20.00th=[10552], 00:31:46.089 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12518], 60.00th=[13435], 00:31:46.089 | 70.00th=[14091], 80.00th=[15664], 90.00th=[18744], 95.00th=[21627], 00:31:46.089 | 99.00th=[33817], 99.50th=[38536], 99.90th=[43254], 99.95th=[43254], 00:31:46.089 | 99.99th=[43254] 00:31:46.089 write: IOPS=4451, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1004msec); 0 zone resets 00:31:46.089 slat (nsec): min=1989, max=13092k, avg=109172.80, stdev=699811.47 00:31:46.089 clat (usec): min=2220, max=48813, avg=16271.97, stdev=9802.76 00:31:46.089 lat (usec): min=2837, max=48820, avg=16381.14, stdev=9881.16 00:31:46.089 clat percentiles (usec): 00:31:46.089 | 1.00th=[ 3884], 5.00th=[ 6194], 10.00th=[ 7832], 20.00th=[ 8979], 00:31:46.089 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[12649], 60.00th=[14484], 00:31:46.089 | 70.00th=[18744], 80.00th=[22676], 90.00th=[32900], 95.00th=[35914], 00:31:46.089 | 99.00th=[43779], 99.50th=[44827], 99.90th=[49021], 99.95th=[49021], 00:31:46.089 | 99.99th=[49021] 00:31:46.089 bw ( KiB/s): min=16032, max=18666, per=23.72%, avg=17349.00, stdev=1862.52, samples=2 00:31:46.089 iops : min= 4008, max= 4666, avg=4337.00, stdev=465.28, samples=2 00:31:46.089 lat (msec) : 4=1.16%, 10=24.02%, 20=58.27%, 50=16.56% 00:31:46.089 cpu : usr=3.19%, sys=4.59%, ctx=340, majf=0, minf=1 00:31:46.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:46.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.089 issued rwts: total=4096,4469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.089 job1: (groupid=0, jobs=1): err= 0: pid=2676698: Wed Nov 27 08:14:39 2024 00:31:46.089 read: IOPS=3741, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1005msec) 00:31:46.089 slat (nsec): min=1106, max=19164k, avg=125817.29, stdev=844037.88 00:31:46.089 clat (usec): min=850, max=51611, avg=14571.73, stdev=7970.89 00:31:46.089 lat (usec): min=4010, max=51616, avg=14697.54, stdev=8022.14 00:31:46.089 clat percentiles (usec): 00:31:46.089 | 1.00th=[ 8291], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10421], 00:31:46.089 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11338], 60.00th=[13042], 00:31:46.089 | 70.00th=[13698], 80.00th=[14615], 90.00th=[26870], 95.00th=[36963], 00:31:46.089 | 99.00th=[42730], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:31:46.089 | 99.99th=[51643] 00:31:46.089 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:31:46.089 slat (usec): min=2, max=12784, avg=124.96, stdev=663.59 00:31:46.089 clat (usec): min=7842, max=57992, avg=17651.57, stdev=12443.67 00:31:46.089 lat (usec): min=7935, max=58032, avg=17776.54, 
stdev=12508.94 00:31:46.089 clat percentiles (usec): 00:31:46.089 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:31:46.089 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11338], 60.00th=[13042], 00:31:46.089 | 70.00th=[18744], 80.00th=[26346], 90.00th=[38536], 95.00th=[49021], 00:31:46.089 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:31:46.089 | 99.99th=[57934] 00:31:46.089 bw ( KiB/s): min=13048, max=19720, per=22.40%, avg=16384.00, stdev=4717.82, samples=2 00:31:46.089 iops : min= 3262, max= 4930, avg=4096.00, stdev=1179.45, samples=2 00:31:46.089 lat (usec) : 1000=0.01% 00:31:46.089 lat (msec) : 10=19.13%, 20=61.01%, 50=17.35%, 100=2.49% 00:31:46.089 cpu : usr=1.99%, sys=3.88%, ctx=535, majf=0, minf=2 00:31:46.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:46.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.089 issued rwts: total=3760,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.089 job2: (groupid=0, jobs=1): err= 0: pid=2676699: Wed Nov 27 08:14:39 2024 00:31:46.089 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:31:46.089 slat (nsec): min=1583, max=10712k, avg=94442.10, stdev=610639.64 00:31:46.089 clat (usec): min=6391, max=22971, avg=11594.20, stdev=1809.52 00:31:46.089 lat (usec): min=6962, max=22977, avg=11688.64, stdev=1858.28 00:31:46.089 clat percentiles (usec): 00:31:46.089 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10421], 00:31:46.089 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:31:46.089 | 70.00th=[12256], 80.00th=[13042], 90.00th=[13960], 95.00th=[15139], 00:31:46.089 | 99.00th=[16188], 99.50th=[16581], 99.90th=[18220], 99.95th=[18744], 00:31:46.089 | 99.99th=[22938] 00:31:46.089 write: IOPS=5277, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1003msec); 0 zone resets 00:31:46.089 slat (usec): min=2, max=11668, avg=92.42, stdev=539.37 00:31:46.089 clat (usec): min=534, max=34223, avg=12601.45, stdev=4007.83 00:31:46.089 lat (usec): min=5789, max=34246, avg=12693.87, stdev=4037.90 00:31:46.089 clat percentiles (usec): 00:31:46.089 | 1.00th=[ 6325], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[10945], 00:31:46.089 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:31:46.089 | 70.00th=[11863], 80.00th=[13042], 90.00th=[15270], 95.00th=[22414], 00:31:46.089 | 99.00th=[29492], 99.50th=[31327], 99.90th=[33424], 99.95th=[33424], 00:31:46.089 | 99.99th=[34341] 00:31:46.089 bw ( KiB/s): min=20439, max=20840, per=28.22%, avg=20639.50, stdev=283.55, samples=2 00:31:46.089 iops : min= 5109, max= 5210, avg=5159.50, stdev=71.42, samples=2 00:31:46.089 lat (usec) : 750=0.01% 00:31:46.089 lat (msec) : 10=10.88%, 20=85.63%, 50=3.48% 00:31:46.089 cpu : usr=3.39%, sys=6.09%, ctx=546, majf=0, minf=1 00:31:46.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:46.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.089 issued rwts: total=5120,5293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.089 job3: (groupid=0, jobs=1): err= 0: pid=2676700: Wed Nov 27 08:14:39 2024 00:31:46.089 read: IOPS=4063, BW=15.9MiB/s 
(16.6MB/s)(16.0MiB/1008msec) 00:31:46.089 slat (nsec): min=1080, max=23528k, avg=117748.31, stdev=1005303.66 00:31:46.089 clat (usec): min=6186, max=54906, avg=15711.24, stdev=8926.11 00:31:46.089 lat (usec): min=6191, max=54936, avg=15828.99, stdev=9005.83 00:31:46.089 clat percentiles (usec): 00:31:46.089 | 1.00th=[ 6194], 5.00th=[ 8356], 10.00th=[ 9765], 20.00th=[10290], 00:31:46.089 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11863], 60.00th=[14222], 00:31:46.089 | 70.00th=[15401], 80.00th=[18482], 90.00th=[32375], 95.00th=[38536], 00:31:46.089 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:31:46.089 | 99.99th=[54789] 00:31:46.089 write: IOPS=4536, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1008msec); 0 zone resets 00:31:46.089 slat (usec): min=2, max=22638, avg=104.96, stdev=797.62 00:31:46.089 clat (usec): min=3139, max=61865, avg=13802.42, stdev=7346.98 00:31:46.089 lat (usec): min=3165, max=61895, avg=13907.38, stdev=7412.72 00:31:46.089 clat percentiles (usec): 00:31:46.089 | 1.00th=[ 3884], 5.00th=[ 6456], 10.00th=[ 8094], 20.00th=[10159], 00:31:46.089 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:31:46.089 | 70.00th=[12780], 80.00th=[17433], 90.00th=[20841], 95.00th=[28181], 00:31:46.089 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[50594], 00:31:46.089 | 99.99th=[61604] 00:31:46.089 bw ( KiB/s): min=16384, max=19184, per=24.32%, avg=17784.00, stdev=1979.90, samples=2 00:31:46.089 iops : min= 4096, max= 4796, avg=4446.00, stdev=494.97, samples=2 00:31:46.089 lat (msec) : 4=0.57%, 10=17.33%, 20=67.37%, 50=14.68%, 100=0.06% 00:31:46.089 cpu : usr=2.78%, sys=4.67%, ctx=421, majf=0, minf=1 00:31:46.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:46.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.089 issued rwts: total=4096,4573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.089 00:31:46.089 Run status group 0 (all jobs): 00:31:46.089 READ: bw=66.2MiB/s (69.4MB/s), 14.6MiB/s-19.9MiB/s (15.3MB/s-20.9MB/s), io=66.7MiB (69.9MB), run=1003-1008msec 00:31:46.089 WRITE: bw=71.4MiB/s (74.9MB/s), 15.9MiB/s-20.6MiB/s (16.7MB/s-21.6MB/s), io=72.0MiB (75.5MB), run=1003-1008msec 00:31:46.089 00:31:46.089 Disk stats (read/write): 00:31:46.089 nvme0n1: ios=3617/3863, merge=0/0, ticks=44518/57588, in_queue=102106, util=92.89% 00:31:46.089 nvme0n2: ios=3431/3584, merge=0/0, ticks=13033/15546, in_queue=28579, util=88.53% 00:31:46.089 nvme0n3: ios=4167/4608, merge=0/0, ticks=23889/27962, in_queue=51851, util=92.93% 00:31:46.089 nvme0n4: ios=3309/3584, merge=0/0, ticks=34933/31525, in_queue=66458, util=99.48% 00:31:46.089 08:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:46.089 08:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2676926 00:31:46.089 08:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:46.089 08:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:46.089 [global] 00:31:46.090 thread=1 00:31:46.090 invalidate=1 00:31:46.090 rw=read 00:31:46.090 time_based=1 00:31:46.090 runtime=10 00:31:46.090 ioengine=libaio 
00:31:46.090 direct=1 00:31:46.090 bs=4096 00:31:46.090 iodepth=1 00:31:46.090 norandommap=1 00:31:46.090 numjobs=1 00:31:46.090 00:31:46.090 [job0] 00:31:46.090 filename=/dev/nvme0n1 00:31:46.090 [job1] 00:31:46.090 filename=/dev/nvme0n2 00:31:46.090 [job2] 00:31:46.090 filename=/dev/nvme0n3 00:31:46.090 [job3] 00:31:46.090 filename=/dev/nvme0n4 00:31:46.090 Could not set queue depth (nvme0n1) 00:31:46.090 Could not set queue depth (nvme0n2) 00:31:46.090 Could not set queue depth (nvme0n3) 00:31:46.090 Could not set queue depth (nvme0n4) 00:31:46.355 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.355 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.355 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.355 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.355 fio-3.35 00:31:46.355 Starting 4 threads 00:31:48.878 08:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:49.136 08:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:49.136 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=8663040, buflen=4096 00:31:49.136 fio: pid=2677075, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:49.397 08:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:49.397 08:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:49.397 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=17969152, buflen=4096 00:31:49.397 fio: pid=2677074, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:49.655 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=40562688, buflen=4096 00:31:49.655 fio: pid=2677072, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:49.655 08:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:49.655 08:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:49.914 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51204096, buflen=4096 00:31:49.914 fio: pid=2677073, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:49.914 08:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:49.914 08:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:49.914 00:31:49.914 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=2677072: Wed Nov 27 08:14:43 2024 00:31:49.914 read: IOPS=3144, BW=12.3MiB/s (12.9MB/s)(38.7MiB/3150msec) 00:31:49.914 slat (usec): min=6, max=11526, avg= 9.14, stdev=138.18 00:31:49.914 clat (usec): min=218, max=2587, avg=305.79, stdev=35.04 00:31:49.914 lat (usec): min=225, max=11979, avg=314.93, stdev=144.63 00:31:49.914 clat percentiles (usec): 00:31:49.914 | 1.00th=[ 243], 5.00th=[ 255], 10.00th=[ 277], 20.00th=[ 297], 00:31:49.914 | 30.00th=[ 302], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310], 00:31:49.914 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 351], 00:31:49.914 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 441], 99.95th=[ 453], 00:31:49.914 | 99.99th=[ 2573] 00:31:49.914 bw ( KiB/s): min=12288, max=13324, per=36.90%, avg=12634.00, stdev=403.07, samples=6 00:31:49.914 iops : min= 3072, max= 3331, avg=3158.50, stdev=100.77, samples=6 00:31:49.914 lat (usec) : 250=2.01%, 500=97.97% 00:31:49.914 lat (msec) : 4=0.01% 00:31:49.914 cpu : usr=0.76%, sys=2.79%, ctx=9906, majf=0, minf=1 00:31:49.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.914 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.914 issued rwts: total=9904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:49.914 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2677073: Wed Nov 27 08:14:43 2024 00:31:49.914 read: IOPS=3702, BW=14.5MiB/s (15.2MB/s)(48.8MiB/3377msec) 00:31:49.914 slat (usec): min=6, max=16549, avg=11.42, stdev=233.78 00:31:49.914 clat (usec): min=196, max=9864, avg=255.09, stdev=101.66 00:31:49.914 lat (usec): min=203, max=16878, avg=266.51, stdev=257.54 00:31:49.914 clat percentiles (usec): 00:31:49.914 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 239], 00:31:49.914 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:31:49.914 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 293], 00:31:49.914 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 408], 99.95th=[ 453], 00:31:49.914 | 99.99th=[ 5669] 00:31:49.914 bw ( KiB/s): min=14656, max=15592, per=43.80%, avg=14996.50, stdev=372.20, samples=6 00:31:49.914 iops : min= 3664, max= 3898, avg=3749.00, stdev=92.94, samples=6 00:31:49.914 lat (usec) : 250=44.23%, 500=55.73%, 750=0.02% 00:31:49.914 lat (msec) : 10=0.02% 00:31:49.914 cpu : usr=0.95%, sys=3.35%, ctx=12506, majf=0, minf=2 00:31:49.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.914 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.914 issued rwts: total=12502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:49.914 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2677074: Wed Nov 27 08:14:43 2024 00:31:49.914 read: IOPS=1484, BW=5936KiB/s (6079kB/s)(17.1MiB/2956msec) 00:31:49.914 slat (nsec): min=6501, max=56797, avg=10172.99, stdev=5710.50 00:31:49.914 clat (usec): min=183, max=42056, avg=656.41, stdev=3925.85 00:31:49.914 lat (usec): min=208, max=42068, avg=666.58, stdev=3926.21 00:31:49.914 clat percentiles (usec): 00:31:49.914 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 247], 
00:31:49.914 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:31:49.914 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 330], 00:31:49.914 | 99.00th=[ 429], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:31:49.914 | 99.99th=[42206] 00:31:49.914 bw ( KiB/s): min= 96, max=11008, per=20.45%, avg=7001.60, stdev=4117.81, samples=5 00:31:49.914 iops : min= 24, max= 2752, avg=1750.40, stdev=1029.45, samples=5 00:31:49.914 lat (usec) : 250=22.08%, 500=76.94%, 750=0.02% 00:31:49.914 lat (msec) : 50=0.93% 00:31:49.914 cpu : usr=0.61%, sys=2.37%, ctx=4388, majf=0, minf=2 00:31:49.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.914 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.914 issued rwts: total=4388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:49.914 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2677075: Wed Nov 27 08:14:43 2024 00:31:49.914 read: IOPS=763, BW=3054KiB/s (3127kB/s)(8460KiB/2770msec) 00:31:49.914 slat (nsec): min=7160, max=48182, avg=8820.96, stdev=3113.48 00:31:49.914 clat (usec): min=210, max=41872, avg=1288.61, stdev=6434.54 00:31:49.914 lat (usec): min=225, max=41900, avg=1297.43, stdev=6436.46 00:31:49.914 clat percentiles (usec): 00:31:49.914 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 239], 00:31:49.914 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:31:49.914 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 273], 00:31:49.914 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:49.914 | 99.99th=[41681] 00:31:49.914 bw ( KiB/s): min= 96, max= 7720, per=9.85%, avg=3374.40, stdev=3363.24, samples=5 00:31:49.914 iops : min= 24, max= 1930, avg=843.60, stdev=840.81, samples=5 00:31:49.914 lat (usec) : 250=67.20%, 500=30.15%, 750=0.05% 00:31:49.914 lat (msec) : 50=2.55% 00:31:49.914 cpu : usr=0.33%, sys=1.37%, ctx=2116, majf=0, minf=2 00:31:49.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.914 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.914 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:49.914 00:31:49.914 Run status group 0 (all jobs): 00:31:49.914 READ: bw=33.4MiB/s (35.1MB/s), 3054KiB/s-14.5MiB/s (3127kB/s-15.2MB/s), io=113MiB (118MB), run=2770-3377msec 00:31:49.914 00:31:49.914 Disk stats (read/write): 00:31:49.914 nvme0n1: ios=9812/0, merge=0/0, ticks=2969/0, in_queue=2969, util=95.16% 00:31:49.914 nvme0n2: ios=12501/0, merge=0/0, ticks=3113/0, in_queue=3113, util=94.97% 00:31:49.914 nvme0n3: ios=4385/0, merge=0/0, ticks=2733/0, in_queue=2733, util=96.45% 00:31:49.914 nvme0n4: ios=2111/0, merge=0/0, ticks=2546/0, in_queue=2546, util=96.41% 00:31:49.914 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:49.914 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:50.173 08:14:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:50.173 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:50.431 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:50.431 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:50.689 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:50.689 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2676926 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:50.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:50.947 nvmf hotplug test: fio failed as expected 00:31:50.947 08:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:51.205 08:14:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:51.205 rmmod nvme_tcp 00:31:51.205 rmmod nvme_fabrics 00:31:51.205 rmmod nvme_keyring 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2674236 ']' 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2674236 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2674236 ']' 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2674236 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.205 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2674236 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2674236' 00:31:51.464 killing process with pid 2674236 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2674236 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2674236 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.464 08:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.001 00:31:54.001 real 0m25.920s 00:31:54.001 user 1m31.032s 00:31:54.001 sys 0m10.923s 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.001 ************************************ 00:31:54.001 END TEST nvmf_fio_target 00:31:54.001 ************************************ 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:54.001 ************************************ 00:31:54.001 START TEST nvmf_bdevio 00:31:54.001 ************************************ 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:54.001 * Looking for test storage... 
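The nvmftestfini teardown traced above unloads nvme-tcp, nvme-fabrics and nvme-keyring, kills the target process, and then calls iptr, which restores the firewall by running iptables-save, grep -v SPDK_NVMF and iptables-restore. A minimal sketch of that filter-and-restore pattern, assuming the SPDK-added rules carry a literal SPDK_NVMF marker; the helper name and the exact pipeline composition are illustrative, not the verbatim nvmf/common.sh implementation:

# Drop every firewall rule tagged SPDK_NVMF and reload the remainder,
# mirroring the iptables-save / grep -v SPDK_NVMF / iptables-restore
# calls visible in the iptr trace above.
iptr_sketch() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}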
00:31:54.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:54.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.001 --rc genhtml_branch_coverage=1 00:31:54.001 --rc genhtml_function_coverage=1 00:31:54.001 --rc genhtml_legend=1 00:31:54.001 --rc geninfo_all_blocks=1 00:31:54.001 --rc geninfo_unexecuted_blocks=1 00:31:54.001 00:31:54.001 ' 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:54.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.001 --rc genhtml_branch_coverage=1 00:31:54.001 --rc genhtml_function_coverage=1 00:31:54.001 --rc genhtml_legend=1 00:31:54.001 --rc geninfo_all_blocks=1 00:31:54.001 --rc geninfo_unexecuted_blocks=1 00:31:54.001 00:31:54.001 ' 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:54.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.001 --rc genhtml_branch_coverage=1 00:31:54.001 --rc genhtml_function_coverage=1 00:31:54.001 --rc genhtml_legend=1 00:31:54.001 --rc geninfo_all_blocks=1 00:31:54.001 --rc geninfo_unexecuted_blocks=1 00:31:54.001 00:31:54.001 ' 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:54.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.001 --rc genhtml_branch_coverage=1 00:31:54.001 --rc genhtml_function_coverage=1 00:31:54.001 --rc genhtml_legend=1 00:31:54.001 --rc geninfo_all_blocks=1 00:31:54.001 --rc geninfo_unexecuted_blocks=1 00:31:54.001 00:31:54.001 ' 00:31:54.001 08:14:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:54.001 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.002 08:14:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.002 08:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.275 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:59.276 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:59.276 08:14:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:59.276 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:59.276 Found net devices under 0000:86:00.0: cvl_0_0 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:59.276 Found net devices under 0000:86:00.1: cvl_0_1 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.276 08:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:31:59.276 00:31:59.276 --- 10.0.0.2 ping statistics --- 00:31:59.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.276 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:31:59.276 00:31:59.276 --- 10.0.0.1 ping statistics --- 00:31:59.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.276 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.276 08:14:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2681306 00:31:59.276 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:59.277 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2681306 00:31:59.277 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2681306 ']' 00:31:59.277 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.277 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.277 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.277 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.277 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:59.277 [2024-11-27 08:14:53.186741] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:59.277 [2024-11-27 08:14:53.187636] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:31:59.277 [2024-11-27 08:14:53.187671] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.277 [2024-11-27 08:14:53.252439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:59.277 [2024-11-27 08:14:53.293659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.277 [2024-11-27 08:14:53.293697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.277 [2024-11-27 08:14:53.293705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.277 [2024-11-27 08:14:53.293711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.277 [2024-11-27 08:14:53.293716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.277 [2024-11-27 08:14:53.295322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:59.277 [2024-11-27 08:14:53.295431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:59.277 [2024-11-27 08:14:53.295560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.277 [2024-11-27 08:14:53.295561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:59.277 [2024-11-27 08:14:53.364657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
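The nvmf_tcp_init/nvmfappstart sequence traced above moves the first e810 port (cvl_0_0) into a fresh cvl_0_0_ns_spdk namespace as the target side, keeps cvl_0_1 in the root namespace as the initiator, assigns 10.0.0.2/24 and 10.0.0.1/24, opens TCP port 4420 with an iptables rule tagged SPDK_NVMF, ping-verifies both directions, and then starts nvmf_tgt inside the namespace in interrupt mode on core mask 0x78. A condensed sketch of that bring-up, assuming the same device names as this run; $SPDK_DIR is a placeholder for the job's spdk checkout:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                       # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic and tag the rule so teardown can strip it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                                    # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                # target namespace -> initiator

    # Start the target in interrupt mode on cores 3-6 (-m 0x78), as in this run.
    ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &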
00:31:59.277 [2024-11-27 08:14:53.365375] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:59.277 [2024-11-27 08:14:53.365652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:59.277 [2024-11-27 08:14:53.365987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:59.277 [2024-11-27 08:14:53.366038] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:59.536 [2024-11-27 08:14:53.444311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:59.536 Malloc0 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.536 08:14:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:59.536 [2024-11-27 08:14:53.512271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:59.536 { 00:31:59.536 "params": { 00:31:59.536 "name": "Nvme$subsystem", 00:31:59.536 "trtype": "$TEST_TRANSPORT", 00:31:59.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:59.536 "adrfam": "ipv4", 00:31:59.536 "trsvcid": "$NVMF_PORT", 00:31:59.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:59.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:59.536 "hdgst": ${hdgst:-false}, 00:31:59.536 "ddgst": ${ddgst:-false} 00:31:59.536 }, 00:31:59.536 "method": "bdev_nvme_attach_controller" 00:31:59.536 } 00:31:59.536 EOF 00:31:59.536 )") 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:59.536 08:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:59.536 "params": { 00:31:59.536 "name": "Nvme1", 00:31:59.536 "trtype": "tcp", 00:31:59.536 "traddr": "10.0.0.2", 00:31:59.536 "adrfam": "ipv4", 00:31:59.536 "trsvcid": "4420", 00:31:59.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:59.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:59.536 "hdgst": false, 00:31:59.536 "ddgst": false 00:31:59.536 }, 00:31:59.536 "method": "bdev_nvme_attach_controller" 00:31:59.536 }' 00:31:59.536 [2024-11-27 08:14:53.561762] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
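The rpc_cmd calls in bdevio.sh above configure the target side of the test: a TCP transport (with the same -o and -u 8192 options the harness passes), a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev, and a listener on 10.0.0.2:4420. rpc_cmd drives the same JSON-RPC methods that scripts/rpc.py exposes, so issuing the configuration by hand would look roughly like this sketch (socket path taken from the waitforlisten message above, $SPDK_DIR as in the earlier sketch):

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420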
00:31:59.536 [2024-11-27 08:14:53.561805] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681333 ] 00:31:59.536 [2024-11-27 08:14:53.625934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:59.795 [2024-11-27 08:14:53.670810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.795 [2024-11-27 08:14:53.670906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.795 [2024-11-27 08:14:53.670908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.795 I/O targets: 00:31:59.795 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:59.795 00:31:59.795 00:31:59.795 CUnit - A unit testing framework for C - Version 2.1-3 00:31:59.795 http://cunit.sourceforge.net/ 00:31:59.795 00:31:59.795 00:31:59.795 Suite: bdevio tests on: Nvme1n1 00:31:59.795 Test: blockdev write read block ...passed 00:32:00.054 Test: blockdev write zeroes read block ...passed 00:32:00.054 Test: blockdev write zeroes read no split ...passed 00:32:00.054 Test: blockdev write zeroes read split ...passed 00:32:00.054 Test: blockdev write zeroes read split partial ...passed 00:32:00.054 Test: blockdev reset ...[2024-11-27 08:14:53.971544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:00.054 [2024-11-27 08:14:53.971610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1b350 (9): Bad file descriptor 00:32:00.054 [2024-11-27 08:14:54.064971] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
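On the initiator side, gen_nvmf_target_json (traced further up) prints the bdev_nvme_attach_controller parameters for Nvme1 -- trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1 -- and bdevio consumes them via --json /dev/fd/62. A hand-written equivalent, assuming the standard SPDK "subsystems"/"bdev" JSON envelope around those params (an assumption; only the params block appears verbatim in the trace) saved as a hypothetical /tmp/nvme1.json:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

which could then be fed to the same binary the trace runs:

    "$SPDK_DIR/test/bdev/bdevio/bdevio" --json /tmp/nvme1.json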
00:32:00.054 passed 00:32:00.054 Test: blockdev write read 8 blocks ...passed 00:32:00.054 Test: blockdev write read size > 128k ...passed 00:32:00.054 Test: blockdev write read invalid size ...passed 00:32:00.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:00.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:00.054 Test: blockdev write read max offset ...passed 00:32:00.313 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:00.313 Test: blockdev writev readv 8 blocks ...passed 00:32:00.313 Test: blockdev writev readv 30 x 1block ...passed 00:32:00.313 Test: blockdev writev readv block ...passed 00:32:00.313 Test: blockdev writev readv size > 128k ...passed 00:32:00.313 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:00.313 Test: blockdev comparev and writev ...[2024-11-27 08:14:54.315910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.313 [2024-11-27 08:14:54.315940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.313 [2024-11-27 08:14:54.315959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.313 [2024-11-27 08:14:54.315968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.313 [2024-11-27 08:14:54.316266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.313 [2024-11-27 08:14:54.316282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:00.313 [2024-11-27 08:14:54.316297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.313 [2024-11-27 08:14:54.316305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:00.313 [2024-11-27 08:14:54.316606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.313 [2024-11-27 08:14:54.316617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:00.313 [2024-11-27 08:14:54.316630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.313 [2024-11-27 08:14:54.316638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:00.313 [2024-11-27 08:14:54.316933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.313 [2024-11-27 08:14:54.316945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:00.313 [2024-11-27 08:14:54.316961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.313 [2024-11-27 08:14:54.316968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:00.313 passed 00:32:00.313 Test: blockdev nvme passthru rw ...passed 00:32:00.313 Test: blockdev nvme passthru vendor specific ...[2024-11-27 08:14:54.399318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:00.313 [2024-11-27 08:14:54.399336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:00.313 [2024-11-27 08:14:54.399451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:00.313 [2024-11-27 08:14:54.399462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:00.313 [2024-11-27 08:14:54.399576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:00.313 [2024-11-27 08:14:54.399586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:00.313 [2024-11-27 08:14:54.399701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:00.313 [2024-11-27 08:14:54.399711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:00.313 passed 00:32:00.313 Test: blockdev nvme admin passthru ...passed 00:32:00.573 Test: blockdev copy ...passed 00:32:00.573 00:32:00.573 Run Summary: Type Total Ran Passed Failed Inactive 00:32:00.573 suites 1 1 n/a 0 0 00:32:00.573 tests 23 23 23 0 0 00:32:00.573 asserts 152 152 152 0 n/a 00:32:00.573 00:32:00.573 Elapsed time = 1.258 seconds 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:00.573 rmmod nvme_tcp 00:32:00.573 rmmod nvme_fabrics 00:32:00.573 rmmod nvme_keyring 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
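The killprocess call above is the same helper seen at the end of the fio-target run: it confirms the platform, resolves the comm name of the pid (reactor_3 here), refuses to treat a sudo wrapper as the target, then kills and waits. A simplified stand-in under the assumption of a standalone shell (so it polls instead of using the built-in wait on a child, and omits the framework's extra handling):

    # Simplified stand-in for the killprocess helper traced above (Linux-only path).
    killprocess() {
        local pid=$1 name
        [[ $(uname) == Linux ]] || return 1
        name=$(ps --no-headers -o comm= "$pid") || return 1    # e.g. reactor_3 for nvmf_tgt
        [[ $name == sudo ]] && return 1                        # never kill an elevated wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done   # wait for it to exit
    }

    killprocess 2681306    # pid of the nvmf_tgt started for this bdevio run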
00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2681306 ']' 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2681306 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2681306 ']' 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2681306 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.573 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2681306 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2681306' 00:32:00.833 killing process with pid 2681306 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2681306 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2681306 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.833 08:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.369 08:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:03.369 00:32:03.369 real 0m9.317s 00:32:03.369 user 
0m8.494s 00:32:03.369 sys 0m4.756s 00:32:03.369 08:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:03.369 08:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:03.369 ************************************ 00:32:03.369 END TEST nvmf_bdevio 00:32:03.369 ************************************ 00:32:03.369 08:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:03.369 00:32:03.369 real 4m23.320s 00:32:03.369 user 9m2.937s 00:32:03.369 sys 1m45.326s 00:32:03.369 08:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:03.369 08:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:03.369 ************************************ 00:32:03.369 END TEST nvmf_target_core_interrupt_mode 00:32:03.369 ************************************ 00:32:03.369 08:14:57 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:03.369 08:14:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:03.369 08:14:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:03.369 08:14:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:03.369 ************************************ 00:32:03.369 START TEST nvmf_interrupt 00:32:03.369 ************************************ 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:03.369 * Looking for test storage... 
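Each suite in this log is driven by the run_test helper: it prints the START banner, times the test script (the real/user/sys lines above), prints the END banner, and the next suite (here nvmf_interrupt) begins. A rough, hypothetical reduction of that pattern -- the real helper in autotest_common.sh also manages xtrace and failure accounting, which is omitted here:

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                # produces the real/user/sys summary seen in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test_sketch nvmf_interrupt \
        "$SPDK_DIR/test/nvmf/target/interrupt.sh" --transport=tcp --interrupt-mode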
00:32:03.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:03.369 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.370 --rc genhtml_branch_coverage=1 00:32:03.370 --rc genhtml_function_coverage=1 00:32:03.370 --rc genhtml_legend=1 00:32:03.370 --rc geninfo_all_blocks=1 00:32:03.370 --rc geninfo_unexecuted_blocks=1 00:32:03.370 00:32:03.370 ' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.370 --rc genhtml_branch_coverage=1 00:32:03.370 --rc genhtml_function_coverage=1 00:32:03.370 --rc genhtml_legend=1 00:32:03.370 --rc geninfo_all_blocks=1 00:32:03.370 --rc geninfo_unexecuted_blocks=1 00:32:03.370 00:32:03.370 ' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.370 --rc genhtml_branch_coverage=1 00:32:03.370 --rc genhtml_function_coverage=1 00:32:03.370 --rc genhtml_legend=1 00:32:03.370 --rc geninfo_all_blocks=1 00:32:03.370 --rc geninfo_unexecuted_blocks=1 00:32:03.370 00:32:03.370 ' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.370 --rc genhtml_branch_coverage=1 00:32:03.370 --rc genhtml_function_coverage=1 00:32:03.370 --rc genhtml_legend=1 00:32:03.370 --rc geninfo_all_blocks=1 00:32:03.370 --rc geninfo_unexecuted_blocks=1 00:32:03.370 00:32:03.370 ' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.370 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:03.371 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.371 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:03.371 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:03.371 08:14:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:03.371 08:14:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:08.648 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.648 08:15:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:08.648 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:08.648 Found net devices under 0000:86:00.0: cvl_0_0 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:08.648 Found net devices under 0000:86:00.1: cvl_0_1 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:08.648 08:15:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:08.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:32:08.648 00:32:08.648 --- 10.0.0.2 ping statistics --- 00:32:08.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.648 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:32:08.648 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:32:08.649 00:32:08.649 --- 10.0.0.1 ping statistics --- 00:32:08.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.649 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2685089 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2685089 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2685089 ']' 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:08.649 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:08.649 [2024-11-27 08:15:02.729744] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:08.649 [2024-11-27 08:15:02.730748] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:32:08.649 [2024-11-27 08:15:02.730786] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.908 [2024-11-27 08:15:02.798409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:08.908 [2024-11-27 08:15:02.840584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
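For reference, the nvmf_tcp_init sequence traced above reduces to a short iproute2 setup plus an interrupt-mode target launch. The following is a minimal sketch, not part of the recorded log, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing that this run discovered, and that it is run from the SPDK repo root; the 0x3 core mask and --interrupt-mode flag are taken from the trace.

# Sketch: target-side namespace and addressing as built by nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk                       # namespace that will own the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side interface into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                 # sanity check: root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back
# Launch the target inside the namespace in interrupt mode (relative path is an assumption).
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &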
00:32:08.908 [2024-11-27 08:15:02.840621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.908 [2024-11-27 08:15:02.840631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.908 [2024-11-27 08:15:02.840637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.908 [2024-11-27 08:15:02.840642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.908 [2024-11-27 08:15:02.841843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.908 [2024-11-27 08:15:02.841847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.908 [2024-11-27 08:15:02.910778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:08.908 [2024-11-27 08:15:02.911004] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:08.908 [2024-11-27 08:15:02.911073] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:08.908 5000+0 records in 00:32:08.908 5000+0 records out 00:32:08.908 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0178406 s, 574 MB/s 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.908 08:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:09.167 AIO0 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:09.168 [2024-11-27 08:15:03.034584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.168 08:15:03 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:09.168 [2024-11-27 08:15:03.058813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2685089 0 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2685089 0 idle 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2685089 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2685089 -w 256 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2685089 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.24 reactor_0' 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2685089 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.24 reactor_0 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2685089 1 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2685089 1 idle 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2685089 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2685089 -w 256 00:32:09.168 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:09.426 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2685141 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:09.426 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2685141 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:09.426 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:09.426 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2685280 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
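The busy/idle verdicts printed next come from interrupt/common.sh, which samples per-thread CPU usage with top and compares the reactor_N thread against a threshold (busy >= 65% / idle <= 30% normally; the busy threshold is lowered to 30% while spdk_nvme_perf is applying load, via BUSY_THRESHOLD=30). A rough standalone equivalent is sketched below; check_reactor is a hypothetical helper distilled from the traced script, and the PID/index arguments are only examples from this run.

# Sketch of the reactor_is_busy_or_idle check used above (hypothetical helper).
check_reactor() {
    local pid=$1 idx=$2 state=$3               # state is "busy" or "idle"
    local busy_threshold=65 idle_threshold=30
    [[ $state == busy ]] && busy_threshold=30  # BUSY_THRESHOLD=30 while perf is running
    # One batch sample of the process's threads; field 9 is %CPU for the reactor thread.
    local line cpu
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
    cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu=${cpu%.*}                              # drop the fractional part, as the script does
    if [[ $state == busy ]]; then
        (( cpu >= busy_threshold ))            # busy: the reactor should be burning CPU
    else
        (( cpu <= idle_threshold ))            # idle: the reactor should be (nearly) asleep
    fi
}
# Example: expect reactor_1 of PID 2685089 to be busy while the randrw load is applied.
check_reactor 2685089 1 busy && echo "reactor_1 is busy"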
00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2685089 0 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2685089 0 busy 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2685089 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2685089 -w 256 00:32:09.427 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2685089 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.46 reactor_0' 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2685089 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.46 reactor_0 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2685089 1 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2685089 1 busy 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2685089 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2685089 -w 256 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:09.685 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2685141 root 20 0 128.2g 46848 33792 R 93.8 0.0 0:00.29 reactor_1' 00:32:09.943 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2685141 root 20 0 128.2g 46848 33792 R 93.8 0.0 0:00.29 reactor_1 00:32:09.943 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:09.943 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:09.943 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:32:09.943 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:32:09.943 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:09.943 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:09.943 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:09.943 08:15:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:09.943 08:15:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2685280 00:32:19.913 Initializing NVMe Controllers 00:32:19.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:19.913 Controller IO queue size 256, less than required. 00:32:19.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:19.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:19.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:19.913 Initialization complete. Launching workers. 
00:32:19.913 ======================================================== 00:32:19.913 Latency(us) 00:32:19.913 Device Information : IOPS MiB/s Average min max 00:32:19.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16146.80 63.07 15862.32 2785.77 19502.24 00:32:19.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15787.60 61.67 16224.73 4282.16 56108.02 00:32:19.913 ======================================================== 00:32:19.913 Total : 31934.40 124.74 16041.49 2785.77 56108.02 00:32:19.913 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2685089 0 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2685089 0 idle 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2685089 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2685089 -w 256 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2685089 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.22 reactor_0' 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2685089 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.22 reactor_0 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:19.913 08:15:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2685089 1 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2685089 1 idle 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2685089 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2685089 -w 256 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2685141 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:09.99 reactor_1' 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2685141 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:09.99 reactor_1 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:19.914 08:15:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:20.172 08:15:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:20.172 08:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:20.172 08:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:20.172 08:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:20.172 08:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2685089 0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2685089 0 idle 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2685089 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2685089 -w 256 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2685089 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.40 reactor_0' 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2685089 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.40 reactor_0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2685089 1 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2685089 1 idle 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2685089 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2685089 -w 256 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2685141 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.05 reactor_1' 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2685141 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.05 reactor_1 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:22.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:22.706 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:22.964 rmmod nvme_tcp 00:32:22.964 rmmod nvme_fabrics 00:32:22.964 rmmod nvme_keyring 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2685089 ']' 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2685089 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2685089 ']' 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2685089 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2685089 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2685089' 00:32:22.964 killing process with pid 2685089 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2685089 00:32:22.964 08:15:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2685089 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:23.221 08:15:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.119 08:15:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.119 00:32:25.119 real 0m22.145s 00:32:25.119 user 0m39.545s 00:32:25.119 sys 0m7.931s 00:32:25.119 08:15:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.119 08:15:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:25.119 ************************************ 00:32:25.119 END TEST nvmf_interrupt 00:32:25.119 ************************************ 00:32:25.377 00:32:25.377 real 26m41.127s 00:32:25.377 user 55m53.821s 00:32:25.377 sys 8m50.499s 00:32:25.377 08:15:19 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.377 08:15:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.377 ************************************ 00:32:25.377 END TEST nvmf_tcp 00:32:25.377 ************************************ 00:32:25.377 08:15:19 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:25.377 08:15:19 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:25.377 08:15:19 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:25.377 08:15:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.377 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:25.377 ************************************ 00:32:25.377 START TEST spdkcli_nvmf_tcp 00:32:25.377 ************************************ 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:25.377 * Looking for test storage... 00:32:25.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.377 --rc genhtml_branch_coverage=1 00:32:25.377 --rc genhtml_function_coverage=1 00:32:25.377 --rc genhtml_legend=1 00:32:25.377 --rc geninfo_all_blocks=1 00:32:25.377 --rc geninfo_unexecuted_blocks=1 00:32:25.377 00:32:25.377 ' 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.377 --rc genhtml_branch_coverage=1 00:32:25.377 --rc genhtml_function_coverage=1 00:32:25.377 --rc genhtml_legend=1 00:32:25.377 --rc geninfo_all_blocks=1 00:32:25.377 --rc geninfo_unexecuted_blocks=1 00:32:25.377 00:32:25.377 ' 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.377 --rc genhtml_branch_coverage=1 00:32:25.377 --rc genhtml_function_coverage=1 00:32:25.377 --rc genhtml_legend=1 00:32:25.377 --rc geninfo_all_blocks=1 00:32:25.377 --rc geninfo_unexecuted_blocks=1 00:32:25.377 00:32:25.377 ' 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.377 --rc genhtml_branch_coverage=1 00:32:25.377 --rc genhtml_function_coverage=1 00:32:25.377 --rc genhtml_legend=1 00:32:25.377 --rc geninfo_all_blocks=1 00:32:25.377 --rc geninfo_unexecuted_blocks=1 00:32:25.377 00:32:25.377 ' 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:25.377 
08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.377 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.635 08:15:19 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.635 08:15:19 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:25.636 08:15:19 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:25.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2688354 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2688354 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2688354 ']' 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.636 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.636 [2024-11-27 08:15:19.549556] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
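A minimal sketch (not part of the captured output) of what the run_nvmf_tgt/waitforlisten steps above amount to: start the nvmf target that the spdkcli test drives and wait for its RPC socket. The binary path, core mask and socket location are the ones shown in this log; the polling loop is a simplified stand-in for waitforlisten.

#!/usr/bin/env bash
set -euo pipefail
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout used throughout this log
RPC_SOCK=/var/tmp/spdk.sock                                  # socket waitforlisten watches above

"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &                 # two reactors (mask 0x3), main core 0
tgt_pid=$!

# Simplified wait: consider the target ready once it is still alive and its RPC socket exists.
for _ in $(seq 1 100); do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    if [ -S "$RPC_SOCK" ]; then break; fi
    sleep 0.1
done
echo "nvmf_tgt ready, pid $tgt_pid"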
00:32:25.636 [2024-11-27 08:15:19.549607] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688354 ] 00:32:25.636 [2024-11-27 08:15:19.610579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:25.636 [2024-11-27 08:15:19.654691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.636 [2024-11-27 08:15:19.654693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.894 08:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:25.894 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:25.894 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:25.894 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:25.894 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:25.894 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:25.894 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:25.894 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:25.894 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:25.894 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:25.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:25.894 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:25.894 ' 00:32:28.487 [2024-11-27 08:15:22.295281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.862 [2024-11-27 08:15:23.551453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:31.764 [2024-11-27 08:15:25.814453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:33.665 [2024-11-27 08:15:27.744423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:35.566 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:35.566 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:35.566 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:35.566 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:35.566 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:35.566 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:35.566 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:35.566 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:35.566 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:35.566 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:35.566 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:35.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:35.566 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:35.566 08:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:35.566 08:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:35.566 08:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.566 08:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:35.566 08:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.566 08:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.566 08:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:35.566 08:15:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:35.825 08:15:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:35.825 08:15:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:35.825 08:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:35.825 08:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:35.825 08:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.825 
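A sketch only (not from the log) of replaying a subset of the configuration commands executed above by hand. It assumes spdkcli.py's one-shot command form, the same form the check_match step above uses with "ll /nvmf", and reuses names and parameters taken from the commands above.

#!/usr/bin/env bash
set -euo pipefail
SPDKCLI=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py

cli() { "$SPDKCLI" "$@"; }                       # one spdkcli command per invocation

cli /bdevs/malloc create 32 512 Malloc1          # 32 MB malloc bdev with 512-byte blocks
cli nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
cli /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4

cli ll /nvmf                                     # dump the tree the match step compares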
08:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:35.825 08:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.825 08:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.825 08:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:35.825 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:35.825 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:35.825 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:35.825 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:35.825 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:35.826 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:35.826 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:35.826 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:35.826 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:35.826 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:35.826 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:35.826 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:35.826 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:35.826 ' 00:32:41.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:41.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:41.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:41.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:41.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:41.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:41.103 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:41.103 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:41.103 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:41.103 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:41.103 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:41.103 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:41.103 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:41.103 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:41.103 08:15:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:41.103 08:15:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:41.103 08:15:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.103 
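A sketch (not from the log) of the teardown order the clear-config job above follows: namespaces and hosts first, then listeners, then subsystems, then the malloc bdevs. The command paths are the ones executed above, again assuming spdkcli.py's one-shot form.

#!/usr/bin/env bash
set -euo pipefail
SPDKCLI=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py
cli() { "$SPDKCLI" "$@"; }

cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all        # drop namespaces before the subsystem
cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2
cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all  # stop listening
cli /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1                       # per-subsystem delete...
cli /nvmf/subsystem delete_all                                               # ...or remove whatever is left
cli /bdevs/malloc delete Malloc1                                             # finally release the backing bdev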
08:15:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2688354 00:32:41.103 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2688354 ']' 00:32:41.103 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2688354 00:32:41.103 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:41.103 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.103 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2688354 00:32:41.103 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:41.103 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:41.103 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2688354' 00:32:41.103 killing process with pid 2688354 00:32:41.103 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2688354 00:32:41.103 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2688354 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2688354 ']' 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2688354 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2688354 ']' 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2688354 00:32:41.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2688354) - No such process 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2688354 is not found' 00:32:41.362 Process with pid 2688354 is not found 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:41.362 00:32:41.362 real 0m15.936s 00:32:41.362 user 0m33.344s 00:32:41.362 sys 0m0.657s 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.362 08:15:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.362 ************************************ 00:32:41.362 END TEST spdkcli_nvmf_tcp 00:32:41.362 ************************************ 00:32:41.362 08:15:35 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:41.362 08:15:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:41.362 08:15:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.362 08:15:35 -- common/autotest_common.sh@10 -- # set +x 00:32:41.362 ************************************ 00:32:41.362 START TEST nvmf_identify_passthru 00:32:41.362 ************************************ 00:32:41.362 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:41.362 * Looking for test 
storage... 00:32:41.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.362 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:41.362 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:41.362 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:41.362 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:41.362 08:15:35 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:41.621 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.621 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:41.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.621 --rc genhtml_branch_coverage=1 00:32:41.621 --rc genhtml_function_coverage=1 00:32:41.621 --rc genhtml_legend=1 00:32:41.621 --rc geninfo_all_blocks=1 00:32:41.621 --rc geninfo_unexecuted_blocks=1 00:32:41.621 00:32:41.621 ' 00:32:41.621 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:41.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.621 --rc genhtml_branch_coverage=1 00:32:41.621 --rc genhtml_function_coverage=1 00:32:41.621 --rc genhtml_legend=1 00:32:41.621 --rc geninfo_all_blocks=1 00:32:41.621 --rc geninfo_unexecuted_blocks=1 00:32:41.621 00:32:41.621 ' 00:32:41.621 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:41.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.621 --rc genhtml_branch_coverage=1 00:32:41.621 --rc genhtml_function_coverage=1 00:32:41.621 --rc genhtml_legend=1 00:32:41.621 --rc geninfo_all_blocks=1 00:32:41.621 --rc geninfo_unexecuted_blocks=1 00:32:41.621 00:32:41.621 ' 00:32:41.621 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:41.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.621 --rc genhtml_branch_coverage=1 00:32:41.621 --rc genhtml_function_coverage=1 00:32:41.621 --rc genhtml_legend=1 00:32:41.621 --rc geninfo_all_blocks=1 00:32:41.621 --rc geninfo_unexecuted_blocks=1 00:32:41.621 00:32:41.621 ' 00:32:41.621 08:15:35 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.621 08:15:35 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.621 08:15:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.621 08:15:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.621 08:15:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.621 08:15:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:41.621 08:15:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:41.621 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:41.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.622 08:15:35 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.622 08:15:35 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.622 08:15:35 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.622 08:15:35 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.622 08:15:35 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.622 08:15:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.622 08:15:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.622 08:15:35 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.622 08:15:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:41.622 08:15:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.622 08:15:35 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.622 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:41.622 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.622 08:15:35 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.622 08:15:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.889 08:15:40 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:46.889 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:46.889 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:46.889 Found net devices under 0000:86:00.0: cvl_0_0 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:46.889 Found net devices under 0000:86:00.1: cvl_0_1 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.889 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.890 08:15:40 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:32:46.890 00:32:46.890 --- 10.0.0.2 ping statistics --- 00:32:46.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.890 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:32:46.890 00:32:46.890 --- 10.0.0.1 ping statistics --- 00:32:46.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.890 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.890 08:15:40 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.890 08:15:40 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:46.890 08:15:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:46.890 08:15:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:47.149 08:15:41 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:47.149 08:15:41 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:47.149 08:15:41 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:47.149 08:15:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:47.149 08:15:41 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:47.149 08:15:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:47.149 08:15:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:47.149 08:15:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:51.336 08:15:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:51.336 08:15:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:51.336 08:15:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:51.336 08:15:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:55.525 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:55.526 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.526 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.526 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2695378 00:32:55.526 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:55.526 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:55.526 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2695378 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2695378 ']' 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.526 [2024-11-27 08:15:49.380648] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:32:55.526 [2024-11-27 08:15:49.380696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.526 [2024-11-27 08:15:49.447374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:55.526 [2024-11-27 08:15:49.490813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.526 [2024-11-27 08:15:49.490854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
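A sketch (not part of the captured output) of how the serial and model number printed above are pulled from the first local NVMe controller; every helper, option and filter below is the one invoked earlier in this log, only stitched into a standalone script.

#!/usr/bin/env bash
set -euo pipefail
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# First PCIe address reported by gen_nvme.sh (same jq filter as get_first_nvme_bdf above)
bdf=$("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

# Identify that controller directly over PCIe and extract the two fields the test keys on
identify() { "$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0; }
serial=$(identify | grep 'Serial Number:' | awk '{print $3}')
model=$(identify | grep 'Model Number:' | awk '{print $3}')

echo "bdf=$bdf serial=$serial model=$model"   # e.g. 0000:5e:00.0 / BTLJ72430F0E1P0FGN / INTEL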
00:32:55.526 [2024-11-27 08:15:49.490861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.526 [2024-11-27 08:15:49.490867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.526 [2024-11-27 08:15:49.490872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.526 [2024-11-27 08:15:49.492314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.526 [2024-11-27 08:15:49.492410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:55.526 [2024-11-27 08:15:49.492489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:55.526 [2024-11-27 08:15:49.492490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:55.526 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.526 INFO: Log level set to 20 00:32:55.526 INFO: Requests: 00:32:55.526 { 00:32:55.526 "jsonrpc": "2.0", 00:32:55.526 "method": "nvmf_set_config", 00:32:55.526 "id": 1, 00:32:55.526 "params": { 00:32:55.526 "admin_cmd_passthru": { 00:32:55.526 "identify_ctrlr": true 00:32:55.526 } 00:32:55.526 } 00:32:55.526 } 00:32:55.526 00:32:55.526 INFO: response: 00:32:55.526 { 00:32:55.526 "jsonrpc": "2.0", 00:32:55.526 "id": 1, 00:32:55.526 "result": true 00:32:55.526 } 00:32:55.526 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.526 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.526 INFO: Setting log level to 20 00:32:55.526 INFO: Setting log level to 20 00:32:55.526 INFO: Log level set to 20 00:32:55.526 INFO: Log level set to 20 00:32:55.526 INFO: Requests: 00:32:55.526 { 00:32:55.526 "jsonrpc": "2.0", 00:32:55.526 "method": "framework_start_init", 00:32:55.526 "id": 1 00:32:55.526 } 00:32:55.526 00:32:55.526 INFO: Requests: 00:32:55.526 { 00:32:55.526 "jsonrpc": "2.0", 00:32:55.526 "method": "framework_start_init", 00:32:55.526 "id": 1 00:32:55.526 } 00:32:55.526 00:32:55.526 [2024-11-27 08:15:49.610189] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:55.526 INFO: response: 00:32:55.526 { 00:32:55.526 "jsonrpc": "2.0", 00:32:55.526 "id": 1, 00:32:55.526 "result": true 00:32:55.526 } 00:32:55.526 00:32:55.526 INFO: response: 00:32:55.526 { 00:32:55.526 "jsonrpc": "2.0", 00:32:55.526 "id": 1, 00:32:55.526 "result": true 00:32:55.526 } 00:32:55.526 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.526 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.526 08:15:49 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:32:55.526 INFO: Setting log level to 40 00:32:55.526 INFO: Setting log level to 40 00:32:55.526 INFO: Setting log level to 40 00:32:55.526 [2024-11-27 08:15:49.623536] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.526 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.526 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.784 08:15:49 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:55.784 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.784 08:15:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.067 Nvme0n1 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.067 [2024-11-27 08:15:52.532116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.067 [ 00:32:59.067 { 00:32:59.067 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:59.067 "subtype": "Discovery", 00:32:59.067 "listen_addresses": [], 00:32:59.067 "allow_any_host": true, 00:32:59.067 "hosts": [] 00:32:59.067 }, 00:32:59.067 { 00:32:59.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:59.067 "subtype": "NVMe", 00:32:59.067 "listen_addresses": [ 00:32:59.067 { 00:32:59.067 "trtype": "TCP", 00:32:59.067 "adrfam": "IPv4", 00:32:59.067 "traddr": "10.0.0.2", 00:32:59.067 "trsvcid": "4420" 00:32:59.067 } 00:32:59.067 ], 00:32:59.067 "allow_any_host": true, 00:32:59.067 "hosts": [], 00:32:59.067 "serial_number": 
"SPDK00000000000001", 00:32:59.067 "model_number": "SPDK bdev Controller", 00:32:59.067 "max_namespaces": 1, 00:32:59.067 "min_cntlid": 1, 00:32:59.067 "max_cntlid": 65519, 00:32:59.067 "namespaces": [ 00:32:59.067 { 00:32:59.067 "nsid": 1, 00:32:59.067 "bdev_name": "Nvme0n1", 00:32:59.067 "name": "Nvme0n1", 00:32:59.067 "nguid": "61F9F30000DF4F23AD10099E23A79210", 00:32:59.067 "uuid": "61f9f300-00df-4f23-ad10-099e23a79210" 00:32:59.067 } 00:32:59.067 ] 00:32:59.067 } 00:32:59.067 ] 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:59.067 08:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.067 rmmod nvme_tcp 00:32:59.067 rmmod nvme_fabrics 00:32:59.067 rmmod nvme_keyring 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 2695378 ']' 00:32:59.067 08:15:52 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2695378 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2695378 ']' 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2695378 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.067 08:15:52 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2695378 00:32:59.067 08:15:53 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:59.067 08:15:53 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:59.067 08:15:53 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2695378' 00:32:59.067 killing process with pid 2695378 00:32:59.067 08:15:53 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2695378 00:32:59.067 08:15:53 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2695378 00:33:00.442 08:15:54 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:00.442 08:15:54 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:00.442 08:15:54 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:00.443 08:15:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:00.443 08:15:54 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:00.443 08:15:54 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:00.443 08:15:54 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:00.443 08:15:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:00.443 08:15:54 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:00.443 08:15:54 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.443 08:15:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:00.443 08:15:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.977 08:15:56 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.977 00:33:02.977 real 0m21.250s 00:33:02.977 user 0m26.499s 00:33:02.977 sys 0m5.788s 00:33:02.977 08:15:56 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.977 08:15:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:02.977 ************************************ 00:33:02.977 END TEST nvmf_identify_passthru 00:33:02.977 ************************************ 00:33:02.977 08:15:56 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:02.977 08:15:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:02.977 08:15:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:02.977 08:15:56 -- common/autotest_common.sh@10 -- # set +x 00:33:02.977 ************************************ 00:33:02.977 START TEST nvmf_dif 00:33:02.977 ************************************ 00:33:02.977 08:15:56 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:02.977 * Looking for test 
storage... 00:33:02.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.977 08:15:56 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:02.977 08:15:56 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:02.977 08:15:56 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:02.977 08:15:56 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:02.977 08:15:56 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.977 08:15:56 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.977 08:15:56 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.977 08:15:56 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.977 08:15:56 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.977 08:15:56 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.977 08:15:56 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.977 08:15:56 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.977 08:15:56 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.977 08:15:56 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:02.978 08:15:56 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.978 08:15:56 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.978 --rc genhtml_branch_coverage=1 00:33:02.978 --rc genhtml_function_coverage=1 00:33:02.978 --rc genhtml_legend=1 00:33:02.978 --rc geninfo_all_blocks=1 00:33:02.978 --rc geninfo_unexecuted_blocks=1 00:33:02.978 00:33:02.978 ' 00:33:02.978 08:15:56 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.978 --rc genhtml_branch_coverage=1 00:33:02.978 --rc genhtml_function_coverage=1 00:33:02.978 --rc genhtml_legend=1 00:33:02.978 --rc geninfo_all_blocks=1 00:33:02.978 --rc geninfo_unexecuted_blocks=1 00:33:02.978 00:33:02.978 ' 00:33:02.978 08:15:56 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.978 --rc genhtml_branch_coverage=1 00:33:02.978 --rc genhtml_function_coverage=1 00:33:02.978 --rc genhtml_legend=1 00:33:02.978 --rc geninfo_all_blocks=1 00:33:02.978 --rc geninfo_unexecuted_blocks=1 00:33:02.978 00:33:02.978 ' 00:33:02.978 08:15:56 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.978 --rc genhtml_branch_coverage=1 00:33:02.978 --rc genhtml_function_coverage=1 00:33:02.978 --rc genhtml_legend=1 00:33:02.978 --rc geninfo_all_blocks=1 00:33:02.978 --rc geninfo_unexecuted_blocks=1 00:33:02.978 00:33:02.978 ' 00:33:02.978 08:15:56 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.978 08:15:56 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.978 08:15:56 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.978 08:15:56 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.978 08:15:56 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.978 08:15:56 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:02.978 08:15:56 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:02.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.978 08:15:56 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:02.978 08:15:56 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:02.978 08:15:56 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:02.978 08:15:56 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:02.978 08:15:56 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.978 08:15:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:02.978 08:15:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:02.978 08:15:56 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:33:02.978 08:15:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:08.242 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.242 
08:16:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:08.242 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:08.242 Found net devices under 0000:86:00.0: cvl_0_0 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.242 08:16:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:08.243 Found net devices under 0000:86:00.1: cvl_0_1 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:08.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:08.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:33:08.243 00:33:08.243 --- 10.0.0.2 ping statistics --- 00:33:08.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.243 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:08.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:33:08.243 00:33:08.243 --- 10.0.0.1 ping statistics --- 00:33:08.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.243 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:08.243 08:16:01 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:10.776 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:10.776 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:10.776 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.776 08:16:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:10.776 08:16:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2700626 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2700626 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2700626 ']' 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:33:10.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.776 [2024-11-27 08:16:04.632007] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:33:10.776 [2024-11-27 08:16:04.632054] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.776 [2024-11-27 08:16:04.698874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.776 [2024-11-27 08:16:04.740647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.776 [2024-11-27 08:16:04.740685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.776 [2024-11-27 08:16:04.740693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.776 [2024-11-27 08:16:04.740699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.776 [2024-11-27 08:16:04.740705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.776 [2024-11-27 08:16:04.741280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.776 08:16:04 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.776 08:16:04 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:10.776 08:16:04 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.776 [2024-11-27 08:16:04.873328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.776 08:16:04 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:10.776 08:16:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.777 08:16:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:11.035 ************************************ 00:33:11.035 START TEST fio_dif_1_default 00:33:11.035 ************************************ 00:33:11.035 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:11.035 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:11.035 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:11.035 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:11.035 08:16:04 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:33:11.035 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.036 bdev_null0 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.036 [2024-11-27 08:16:04.937629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:11.036 { 00:33:11.036 "params": { 00:33:11.036 "name": "Nvme$subsystem", 00:33:11.036 "trtype": "$TEST_TRANSPORT", 00:33:11.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.036 "adrfam": "ipv4", 00:33:11.036 "trsvcid": "$NVMF_PORT", 00:33:11.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.036 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:11.036 "hdgst": ${hdgst:-false}, 00:33:11.036 "ddgst": ${ddgst:-false} 00:33:11.036 }, 00:33:11.036 "method": "bdev_nvme_attach_controller" 00:33:11.036 } 00:33:11.036 EOF 00:33:11.036 )") 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:11.036 "params": { 00:33:11.036 "name": "Nvme0", 00:33:11.036 "trtype": "tcp", 00:33:11.036 "traddr": "10.0.0.2", 00:33:11.036 "adrfam": "ipv4", 00:33:11.036 "trsvcid": "4420", 00:33:11.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:11.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:11.036 "hdgst": false, 00:33:11.036 "ddgst": false 00:33:11.036 }, 00:33:11.036 "method": "bdev_nvme_attach_controller" 00:33:11.036 }' 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:11.036 08:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:11.036 08:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:11.036 08:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:11.036 08:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:11.036 08:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.294 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:11.294 fio-3.35 00:33:11.294 Starting 1 thread 00:33:23.507 00:33:23.507 filename0: (groupid=0, jobs=1): err= 0: pid=2700994: Wed Nov 27 08:16:15 2024 00:33:23.507 read: IOPS=95, BW=381KiB/s (391kB/s)(3824KiB/10024msec) 00:33:23.507 slat (nsec): min=6122, max=32693, avg=6481.29, stdev=1102.73 00:33:23.507 clat (usec): min=40905, max=43851, avg=41922.83, stdev=282.87 00:33:23.507 lat (usec): min=40911, max=43883, avg=41929.31, stdev=283.12 00:33:23.507 clat percentiles (usec): 00:33:23.507 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:33:23.507 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:23.507 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:23.507 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:33:23.507 | 99.99th=[43779] 00:33:23.507 bw ( KiB/s): min= 352, max= 384, per=99.61%, avg=380.80, stdev= 9.85, samples=20 00:33:23.507 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:33:23.507 lat (msec) : 50=100.00% 00:33:23.507 cpu : usr=92.59%, sys=7.16%, ctx=18, majf=0, minf=0 00:33:23.507 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.507 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.507 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:23.507 00:33:23.507 Run status group 0 (all jobs): 
00:33:23.507 READ: bw=381KiB/s (391kB/s), 381KiB/s-381KiB/s (391kB/s-391kB/s), io=3824KiB (3916kB), run=10024-10024msec 00:33:23.507 08:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:23.507 08:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:23.507 08:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:23.507 08:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.508 00:33:23.508 real 0m11.194s 00:33:23.508 user 0m15.743s 00:33:23.508 sys 0m0.982s 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 ************************************ 00:33:23.508 END TEST fio_dif_1_default 00:33:23.508 ************************************ 00:33:23.508 08:16:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:23.508 08:16:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:23.508 08:16:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 ************************************ 00:33:23.508 START TEST fio_dif_1_multi_subsystems 00:33:23.508 ************************************ 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 bdev_null0 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 [2024-11-27 08:16:16.204874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 bdev_null1 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.508 { 00:33:23.508 "params": { 00:33:23.508 "name": "Nvme$subsystem", 00:33:23.508 "trtype": "$TEST_TRANSPORT", 00:33:23.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.508 "adrfam": "ipv4", 00:33:23.508 "trsvcid": "$NVMF_PORT", 00:33:23.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.508 "hdgst": ${hdgst:-false}, 00:33:23.508 "ddgst": ${ddgst:-false} 00:33:23.508 }, 00:33:23.508 "method": "bdev_nvme_attach_controller" 00:33:23.508 } 00:33:23.508 EOF 00:33:23.508 )") 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@73 -- # cat 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.508 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.508 { 00:33:23.508 "params": { 00:33:23.508 "name": "Nvme$subsystem", 00:33:23.508 "trtype": "$TEST_TRANSPORT", 00:33:23.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.508 "adrfam": "ipv4", 00:33:23.508 "trsvcid": "$NVMF_PORT", 00:33:23.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.508 "hdgst": ${hdgst:-false}, 00:33:23.508 "ddgst": ${ddgst:-false} 00:33:23.509 }, 00:33:23.509 "method": "bdev_nvme_attach_controller" 00:33:23.509 } 00:33:23.509 EOF 00:33:23.509 )") 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:23.509 "params": { 00:33:23.509 "name": "Nvme0", 00:33:23.509 "trtype": "tcp", 00:33:23.509 "traddr": "10.0.0.2", 00:33:23.509 "adrfam": "ipv4", 00:33:23.509 "trsvcid": "4420", 00:33:23.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:23.509 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:23.509 "hdgst": false, 00:33:23.509 "ddgst": false 00:33:23.509 }, 00:33:23.509 "method": "bdev_nvme_attach_controller" 00:33:23.509 },{ 00:33:23.509 "params": { 00:33:23.509 "name": "Nvme1", 00:33:23.509 "trtype": "tcp", 00:33:23.509 "traddr": "10.0.0.2", 00:33:23.509 "adrfam": "ipv4", 00:33:23.509 "trsvcid": "4420", 00:33:23.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:23.509 "hdgst": false, 00:33:23.509 "ddgst": false 00:33:23.509 }, 00:33:23.509 "method": "bdev_nvme_attach_controller" 00:33:23.509 }' 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:23.509 08:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.509 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.509 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.509 fio-3.35 00:33:23.509 Starting 2 threads 00:33:33.479 00:33:33.479 filename0: (groupid=0, jobs=1): err= 0: pid=2702961: Wed Nov 27 08:16:27 2024 00:33:33.479 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10004msec) 00:33:33.479 slat (nsec): min=6108, max=44221, avg=12041.75, stdev=9440.51 00:33:33.479 clat (usec): min=40833, max=42450, avg=41641.78, stdev=461.75 00:33:33.479 lat (usec): min=40840, max=42483, avg=41653.82, stdev=462.22 00:33:33.479 clat percentiles (usec): 00:33:33.479 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:33.479 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:33:33.479 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:33.479 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:33.479 | 99.99th=[42206] 00:33:33.479 bw ( KiB/s): min= 352, max= 416, per=34.73%, avg=383.95, stdev=15.09, samples=19 00:33:33.479 iops : min= 88, max= 104, avg=95.95, stdev= 3.78, samples=19 00:33:33.479 lat (msec) : 50=100.00% 00:33:33.479 cpu : usr=98.95%, sys=0.75%, ctx=35, majf=0, minf=168 00:33:33.479 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.479 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.479 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.479 filename1: (groupid=0, jobs=1): err= 0: pid=2702962: Wed Nov 27 08:16:27 2024 00:33:33.479 read: IOPS=179, BW=719KiB/s (736kB/s)(7200KiB/10011msec) 00:33:33.479 slat (nsec): min=6067, max=43093, avg=8969.60, stdev=5893.77 00:33:33.479 clat (usec): min=417, max=42615, avg=22218.41, stdev=20542.38 00:33:33.479 lat (usec): min=423, max=42622, avg=22227.38, stdev=20540.40 00:33:33.479 clat percentiles (usec): 00:33:33.479 | 1.00th=[ 433], 5.00th=[ 441], 10.00th=[ 445], 20.00th=[ 453], 00:33:33.479 | 30.00th=[ 461], 40.00th=[ 478], 50.00th=[41157], 60.00th=[41681], 00:33:33.479 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:33:33.479 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:33.479 | 99.99th=[42730] 00:33:33.479 bw ( KiB/s): min= 351, max= 768, per=65.11%, avg=718.35, stdev=121.89, samples=20 00:33:33.479 iops : min= 87, max= 192, avg=179.55, stdev=30.59, samples=20 00:33:33.479 lat (usec) : 500=45.67%, 750=1.44% 00:33:33.479 lat (msec) : 50=52.89% 00:33:33.479 cpu : usr=97.39%, sys=2.34%, ctx=10, majf=0, minf=35 00:33:33.479 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:33.479 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.479 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.479 00:33:33.479 Run status group 0 (all jobs): 00:33:33.479 READ: bw=1103KiB/s (1129kB/s), 384KiB/s-719KiB/s (393kB/s-736kB/s), io=10.8MiB (11.3MB), run=10004-10011msec 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.739 00:33:33.739 real 0m11.463s 00:33:33.739 user 0m26.588s 00:33:33.739 sys 0m0.618s 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.739 08:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.739 ************************************ 00:33:33.739 END TEST fio_dif_1_multi_subsystems 00:33:33.739 ************************************ 00:33:33.739 08:16:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:33.739 08:16:27 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:33.739 08:16:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:33.739 08:16:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:33.739 ************************************ 00:33:33.739 START TEST fio_dif_rand_params 00:33:33.739 ************************************ 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.739 bdev_null0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.739 [2024-11-27 08:16:27.740099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:33.739 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:33.739 { 00:33:33.739 "params": { 00:33:33.739 "name": "Nvme$subsystem", 00:33:33.739 "trtype": "$TEST_TRANSPORT", 00:33:33.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:33.739 "adrfam": "ipv4", 00:33:33.739 "trsvcid": "$NVMF_PORT", 00:33:33.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:33.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:33.740 "hdgst": ${hdgst:-false}, 00:33:33.740 "ddgst": ${ddgst:-false} 00:33:33.740 }, 00:33:33.740 "method": "bdev_nvme_attach_controller" 00:33:33.740 } 00:33:33.740 EOF 00:33:33.740 )") 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:33.740 "params": { 00:33:33.740 "name": "Nvme0", 00:33:33.740 "trtype": "tcp", 00:33:33.740 "traddr": "10.0.0.2", 00:33:33.740 "adrfam": "ipv4", 00:33:33.740 "trsvcid": "4420", 00:33:33.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:33.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:33.740 "hdgst": false, 00:33:33.740 "ddgst": false 00:33:33.740 }, 00:33:33.740 "method": "bdev_nvme_attach_controller" 00:33:33.740 }' 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:33.740 08:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.997 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:33.997 ... 
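[editorial note] The trace above reduces to a handful of SPDK RPCs plus one fio run through the bdev plugin. The sketch below is a minimal, hand-run equivalent of that flow, assuming a running nvmf_tgt, the stock scripts/rpc.py wrapper, an in-tree build/fio/spdk_bdev plugin path, and on-disk config/job files in place of the /dev/fd descriptors used by the harness; the RPC names and their arguments are the ones that appear verbatim in the trace, everything else is illustrative.

    # create a 64 MiB null bdev with 16-byte metadata and DIF type 3 (NULL_DIF=3 in this test),
    # then expose it as an NVMe/TCP subsystem listening on 10.0.0.2:4420
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # drive I/O through the SPDK fio bdev ioengine; bdev.json would carry the
    # bdev_nvme_attach_controller parameters printed by the trace above
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./randread.fio

    # randread.fio is an assumed stand-in for the generated /dev/fd/61 job file,
    # restating only the parameters visible in the job header and trace
    # (filename0, rw=randread, bs=128k, iodepth=3, numjobs=3, runtime=5)
    cat > randread.fio <<'EOF'
    [filename0]
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    filename=Nvme0n1
    EOF

The harness itself generates both files on the fly (gen_nvmf_target_json and gen_fio_conf) and feeds them to fio via process substitution, which is why the captured command line references /dev/fd/62 and /dev/fd/61 instead of named files.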
00:33:33.997 fio-3.35 00:33:33.997 Starting 3 threads 00:33:40.565 00:33:40.565 filename0: (groupid=0, jobs=1): err= 0: pid=2704921: Wed Nov 27 08:16:33 2024 00:33:40.565 read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(171MiB/5006msec) 00:33:40.565 slat (nsec): min=6157, max=33264, avg=10968.57, stdev=2373.55 00:33:40.565 clat (usec): min=3378, max=89114, avg=10993.66, stdev=8780.72 00:33:40.565 lat (usec): min=3385, max=89126, avg=11004.63, stdev=8780.55 00:33:40.565 clat percentiles (usec): 00:33:40.565 | 1.00th=[ 4047], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7963], 00:33:40.565 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:33:40.565 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11469], 95.00th=[12780], 00:33:40.565 | 99.00th=[49546], 99.50th=[50070], 99.90th=[88605], 99.95th=[88605], 00:33:40.565 | 99.99th=[88605] 00:33:40.565 bw ( KiB/s): min=23040, max=43264, per=31.27%, avg=34867.20, stdev=6463.74, samples=10 00:33:40.565 iops : min= 180, max= 338, avg=272.40, stdev=50.50, samples=10 00:33:40.565 lat (msec) : 4=0.81%, 10=61.73%, 20=33.06%, 50=3.74%, 100=0.66% 00:33:40.565 cpu : usr=94.37%, sys=5.31%, ctx=18, majf=0, minf=117 00:33:40.565 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.565 issued rwts: total=1364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.565 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.565 filename0: (groupid=0, jobs=1): err= 0: pid=2704922: Wed Nov 27 08:16:33 2024 00:33:40.565 read: IOPS=306, BW=38.3MiB/s (40.1MB/s)(192MiB/5004msec) 00:33:40.565 slat (nsec): min=6151, max=36541, avg=11221.90, stdev=2131.59 00:33:40.565 clat (usec): min=3968, max=54376, avg=9784.27, stdev=5470.11 00:33:40.565 lat (usec): min=3978, max=54383, avg=9795.49, stdev=5469.95 00:33:40.565 clat percentiles (usec): 00:33:40.565 | 1.00th=[ 4424], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7046], 00:33:40.565 | 30.00th=[ 8225], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9896], 00:33:40.565 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11469], 95.00th=[12125], 00:33:40.565 | 99.00th=[46924], 99.50th=[49546], 99.90th=[53740], 99.95th=[54264], 00:33:40.565 | 99.99th=[54264] 00:33:40.565 bw ( KiB/s): min=27648, max=46336, per=35.13%, avg=39168.00, stdev=5049.83, samples=10 00:33:40.565 iops : min= 216, max= 362, avg=306.00, stdev=39.45, samples=10 00:33:40.565 lat (msec) : 4=0.07%, 10=62.34%, 20=35.84%, 50=1.31%, 100=0.46% 00:33:40.565 cpu : usr=94.08%, sys=5.60%, ctx=10, majf=0, minf=61 00:33:40.565 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.565 issued rwts: total=1532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.565 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.565 filename0: (groupid=0, jobs=1): err= 0: pid=2704923: Wed Nov 27 08:16:33 2024 00:33:40.565 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(187MiB/5044msec) 00:33:40.565 slat (nsec): min=6172, max=28165, avg=11287.23, stdev=2321.69 00:33:40.565 clat (usec): min=3418, max=51805, avg=10058.47, stdev=6453.79 00:33:40.565 lat (usec): min=3425, max=51816, avg=10069.75, stdev=6453.72 00:33:40.565 clat percentiles (usec): 00:33:40.565 | 1.00th=[ 3982], 5.00th=[ 5669], 
10.00th=[ 6521], 20.00th=[ 7504], 00:33:40.565 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9765], 00:33:40.565 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11731], 95.00th=[12518], 00:33:40.565 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:33:40.565 | 99.99th=[51643] 00:33:40.565 bw ( KiB/s): min=32512, max=47360, per=34.35%, avg=38297.60, stdev=4575.01, samples=10 00:33:40.565 iops : min= 254, max= 370, avg=299.20, stdev=35.74, samples=10 00:33:40.565 lat (msec) : 4=1.07%, 10=64.95%, 20=31.44%, 50=2.00%, 100=0.53% 00:33:40.565 cpu : usr=93.83%, sys=5.83%, ctx=11, majf=0, minf=72 00:33:40.565 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.565 issued rwts: total=1498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.565 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.565 00:33:40.565 Run status group 0 (all jobs): 00:33:40.565 READ: bw=109MiB/s (114MB/s), 34.1MiB/s-38.3MiB/s (35.7MB/s-40.1MB/s), io=549MiB (576MB), run=5004-5044msec 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.565 bdev_null0 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.565 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.566 [2024-11-27 08:16:33.879650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.566 bdev_null1 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.566 08:16:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.566 bdev_null2 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.566 { 00:33:40.566 "params": { 00:33:40.566 "name": "Nvme$subsystem", 00:33:40.566 "trtype": "$TEST_TRANSPORT", 00:33:40.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.566 "adrfam": "ipv4", 00:33:40.566 "trsvcid": "$NVMF_PORT", 00:33:40.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.566 "hdgst": ${hdgst:-false}, 00:33:40.566 "ddgst": ${ddgst:-false} 00:33:40.566 }, 00:33:40.566 "method": "bdev_nvme_attach_controller" 00:33:40.566 } 00:33:40.566 EOF 00:33:40.566 )") 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.566 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.567 { 00:33:40.567 "params": { 00:33:40.567 "name": "Nvme$subsystem", 00:33:40.567 "trtype": "$TEST_TRANSPORT", 00:33:40.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.567 "adrfam": "ipv4", 00:33:40.567 "trsvcid": "$NVMF_PORT", 00:33:40.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.567 "hdgst": ${hdgst:-false}, 00:33:40.567 "ddgst": ${ddgst:-false} 00:33:40.567 }, 00:33:40.567 "method": "bdev_nvme_attach_controller" 00:33:40.567 } 00:33:40.567 EOF 00:33:40.567 )") 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
cat 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.567 { 00:33:40.567 "params": { 00:33:40.567 "name": "Nvme$subsystem", 00:33:40.567 "trtype": "$TEST_TRANSPORT", 00:33:40.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.567 "adrfam": "ipv4", 00:33:40.567 "trsvcid": "$NVMF_PORT", 00:33:40.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.567 "hdgst": ${hdgst:-false}, 00:33:40.567 "ddgst": ${ddgst:-false} 00:33:40.567 }, 00:33:40.567 "method": "bdev_nvme_attach_controller" 00:33:40.567 } 00:33:40.567 EOF 00:33:40.567 )") 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:40.567 "params": { 00:33:40.567 "name": "Nvme0", 00:33:40.567 "trtype": "tcp", 00:33:40.567 "traddr": "10.0.0.2", 00:33:40.567 "adrfam": "ipv4", 00:33:40.567 "trsvcid": "4420", 00:33:40.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:40.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:40.567 "hdgst": false, 00:33:40.567 "ddgst": false 00:33:40.567 }, 00:33:40.567 "method": "bdev_nvme_attach_controller" 00:33:40.567 },{ 00:33:40.567 "params": { 00:33:40.567 "name": "Nvme1", 00:33:40.567 "trtype": "tcp", 00:33:40.567 "traddr": "10.0.0.2", 00:33:40.567 "adrfam": "ipv4", 00:33:40.567 "trsvcid": "4420", 00:33:40.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:40.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:40.567 "hdgst": false, 00:33:40.567 "ddgst": false 00:33:40.567 }, 00:33:40.567 "method": "bdev_nvme_attach_controller" 00:33:40.567 },{ 00:33:40.567 "params": { 00:33:40.567 "name": "Nvme2", 00:33:40.567 "trtype": "tcp", 00:33:40.567 "traddr": "10.0.0.2", 00:33:40.567 "adrfam": "ipv4", 00:33:40.567 "trsvcid": "4420", 00:33:40.567 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:40.567 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:40.567 "hdgst": false, 00:33:40.567 "ddgst": false 00:33:40.567 }, 00:33:40.567 "method": "bdev_nvme_attach_controller" 00:33:40.567 }' 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:40.567 08:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.567 08:16:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.567 08:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.567 08:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:40.567 08:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.567 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.567 ... 00:33:40.567 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.567 ... 00:33:40.567 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.567 ... 00:33:40.567 fio-3.35 00:33:40.567 Starting 24 threads 00:33:52.810 00:33:52.810 filename0: (groupid=0, jobs=1): err= 0: pid=2706080: Wed Nov 27 08:16:45 2024 00:33:52.810 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.0MiB/10017msec) 00:33:52.810 slat (nsec): min=3308, max=94278, avg=35951.45, stdev=21293.38 00:33:52.810 clat (usec): min=18206, max=50964, avg=28178.40, stdev=1402.85 00:33:52.810 lat (usec): min=18214, max=50976, avg=28214.35, stdev=1400.87 00:33:52.810 clat percentiles (usec): 00:33:52.810 | 1.00th=[21627], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:52.810 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:52.810 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.810 | 99.00th=[28967], 99.50th=[30802], 99.90th=[45351], 99.95th=[45876], 00:33:52.810 | 99.99th=[51119] 00:33:52.810 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2246.50, stdev=62.08, samples=20 00:33:52.810 iops : min= 544, max= 576, avg=561.60, stdev=15.52, samples=20 00:33:52.810 lat (msec) : 20=0.57%, 50=99.40%, 100=0.04% 00:33:52.810 cpu : usr=98.58%, sys=1.02%, ctx=14, majf=0, minf=9 00:33:52.810 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:52.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.810 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.810 issued rwts: total=5638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.810 filename0: (groupid=0, jobs=1): err= 0: pid=2706081: Wed Nov 27 08:16:45 2024 00:33:52.810 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.0MiB/10010msec) 00:33:52.810 slat (nsec): min=5743, max=44292, avg=20782.30, stdev=6016.42 00:33:52.810 clat (usec): min=15086, max=33869, avg=28250.18, stdev=743.87 00:33:52.810 lat (usec): min=15100, max=33885, avg=28270.97, stdev=744.36 00:33:52.810 clat percentiles (usec): 00:33:52.810 | 1.00th=[27657], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.810 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:33:52.810 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.810 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:33:52.810 | 99.99th=[33817] 00:33:52.810 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2243.37, stdev=65.66, samples=19 00:33:52.810 iops : min= 544, max= 576, avg=560.84, stdev=16.42, samples=19 00:33:52.810 lat (msec) : 20=0.28%, 50=99.72% 00:33:52.810 cpu : usr=98.57%, sys=1.05%, ctx=11, majf=0, minf=9 
00:33:52.810 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.810 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.810 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.810 filename0: (groupid=0, jobs=1): err= 0: pid=2706082: Wed Nov 27 08:16:45 2024 00:33:52.810 read: IOPS=563, BW=2254KiB/s (2308kB/s)(22.1MiB/10022msec) 00:33:52.810 slat (nsec): min=7154, max=44995, avg=10241.51, stdev=3175.41 00:33:52.810 clat (usec): min=11726, max=36273, avg=28295.99, stdev=1033.06 00:33:52.810 lat (usec): min=11740, max=36282, avg=28306.23, stdev=1032.54 00:33:52.810 clat percentiles (usec): 00:33:52.810 | 1.00th=[25560], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.810 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:52.810 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:52.810 | 99.00th=[28967], 99.50th=[29492], 99.90th=[29492], 99.95th=[29492], 00:33:52.810 | 99.99th=[36439] 00:33:52.810 bw ( KiB/s): min= 2176, max= 2308, per=4.17%, avg=2253.00, stdev=64.51, samples=20 00:33:52.810 iops : min= 544, max= 577, avg=563.25, stdev=16.13, samples=20 00:33:52.810 lat (msec) : 20=0.28%, 50=99.72% 00:33:52.810 cpu : usr=98.33%, sys=1.29%, ctx=14, majf=0, minf=9 00:33:52.810 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.810 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.810 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.811 filename0: (groupid=0, jobs=1): err= 0: pid=2706083: Wed Nov 27 08:16:45 2024 00:33:52.811 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.0MiB/10003msec) 00:33:52.811 slat (nsec): min=6253, max=92675, avg=36322.15, stdev=23754.43 00:33:52.811 clat (usec): min=8467, max=66061, avg=28133.59, stdev=2032.69 00:33:52.811 lat (usec): min=8474, max=66081, avg=28169.92, stdev=2033.08 00:33:52.811 clat percentiles (usec): 00:33:52.811 | 1.00th=[22152], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:52.811 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:52.811 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:52.811 | 99.00th=[29492], 99.50th=[35914], 99.90th=[51643], 99.95th=[51643], 00:33:52.811 | 99.99th=[66323] 00:33:52.811 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2241.68, stdev=67.65, samples=19 00:33:52.811 iops : min= 512, max= 576, avg=560.42, stdev=16.91, samples=19 00:33:52.811 lat (msec) : 10=0.11%, 20=0.75%, 50=98.86%, 100=0.28% 00:33:52.811 cpu : usr=98.80%, sys=0.82%, ctx=12, majf=0, minf=9 00:33:52.811 IO depths : 1=0.1%, 2=5.6%, 4=22.3%, 8=59.0%, 16=13.1%, 32=0.0%, >=64=0.0% 00:33:52.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 complete : 0=0.0%, 4=93.8%, 8=1.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 issued rwts: total=5628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.811 filename0: (groupid=0, jobs=1): err= 0: pid=2706084: Wed Nov 27 08:16:45 2024 00:33:52.811 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10011msec) 00:33:52.811 slat 
(nsec): min=7191, max=35306, avg=16196.20, stdev=4247.90 00:33:52.811 clat (usec): min=12650, max=39164, avg=28292.41, stdev=1093.08 00:33:52.811 lat (usec): min=12662, max=39178, avg=28308.61, stdev=1093.29 00:33:52.811 clat percentiles (usec): 00:33:52.811 | 1.00th=[28181], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.811 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:33:52.811 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.811 | 99.00th=[29230], 99.50th=[29230], 99.90th=[39060], 99.95th=[39060], 00:33:52.811 | 99.99th=[39060] 00:33:52.811 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2243.37, stdev=65.66, samples=19 00:33:52.811 iops : min= 544, max= 576, avg=560.84, stdev=16.42, samples=19 00:33:52.811 lat (msec) : 20=0.28%, 50=99.72% 00:33:52.811 cpu : usr=98.63%, sys=1.00%, ctx=15, majf=0, minf=9 00:33:52.811 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.811 filename0: (groupid=0, jobs=1): err= 0: pid=2706086: Wed Nov 27 08:16:45 2024 00:33:52.811 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10002msec) 00:33:52.811 slat (nsec): min=8144, max=92973, avg=41019.95, stdev=21690.44 00:33:52.811 clat (usec): min=14529, max=51829, avg=28133.92, stdev=1483.74 00:33:52.811 lat (usec): min=14548, max=51847, avg=28174.94, stdev=1482.42 00:33:52.811 clat percentiles (usec): 00:33:52.811 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:52.811 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:33:52.811 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.811 | 99.00th=[28967], 99.50th=[29230], 99.90th=[51643], 99.95th=[51643], 00:33:52.811 | 99.99th=[51643] 00:33:52.811 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2236.63, stdev=78.31, samples=19 00:33:52.811 iops : min= 512, max= 576, avg=559.16, stdev=19.58, samples=19 00:33:52.811 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:33:52.811 cpu : usr=98.65%, sys=0.99%, ctx=12, majf=0, minf=9 00:33:52.811 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.811 filename0: (groupid=0, jobs=1): err= 0: pid=2706087: Wed Nov 27 08:16:45 2024 00:33:52.811 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10002msec) 00:33:52.811 slat (nsec): min=8598, max=91029, avg=41145.76, stdev=21452.77 00:33:52.811 clat (usec): min=14343, max=51722, avg=28174.58, stdev=1482.06 00:33:52.811 lat (usec): min=14363, max=51740, avg=28215.73, stdev=1479.51 00:33:52.811 clat percentiles (usec): 00:33:52.811 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:52.811 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:52.811 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.811 | 99.00th=[28967], 99.50th=[29230], 99.90th=[51643], 99.95th=[51643], 00:33:52.811 | 
99.99th=[51643] 00:33:52.811 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2236.63, stdev=78.31, samples=19 00:33:52.811 iops : min= 512, max= 576, avg=559.16, stdev=19.58, samples=19 00:33:52.811 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:33:52.811 cpu : usr=98.73%, sys=0.89%, ctx=13, majf=0, minf=9 00:33:52.811 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.811 filename0: (groupid=0, jobs=1): err= 0: pid=2706088: Wed Nov 27 08:16:45 2024 00:33:52.811 read: IOPS=563, BW=2254KiB/s (2308kB/s)(22.1MiB/10022msec) 00:33:52.811 slat (nsec): min=7034, max=86582, avg=17500.74, stdev=7426.89 00:33:52.811 clat (usec): min=11758, max=36447, avg=28243.97, stdev=1032.31 00:33:52.811 lat (usec): min=11772, max=36471, avg=28261.47, stdev=1031.80 00:33:52.811 clat percentiles (usec): 00:33:52.811 | 1.00th=[25560], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:33:52.811 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:33:52.811 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:52.811 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:33:52.811 | 99.99th=[36439] 00:33:52.811 bw ( KiB/s): min= 2176, max= 2308, per=4.17%, avg=2253.00, stdev=64.51, samples=20 00:33:52.811 iops : min= 544, max= 577, avg=563.25, stdev=16.13, samples=20 00:33:52.811 lat (msec) : 20=0.28%, 50=99.72% 00:33:52.811 cpu : usr=98.37%, sys=1.25%, ctx=14, majf=0, minf=9 00:33:52.811 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.811 filename1: (groupid=0, jobs=1): err= 0: pid=2706089: Wed Nov 27 08:16:45 2024 00:33:52.811 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10022msec) 00:33:52.811 slat (nsec): min=7116, max=85016, avg=20108.96, stdev=8211.24 00:33:52.811 clat (usec): min=10635, max=29469, avg=28146.22, stdev=1520.02 00:33:52.811 lat (usec): min=10646, max=29481, avg=28166.33, stdev=1519.49 00:33:52.811 clat percentiles (usec): 00:33:52.811 | 1.00th=[20317], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:33:52.811 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:33:52.811 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:52.811 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:33:52.811 | 99.99th=[29492] 00:33:52.811 bw ( KiB/s): min= 2176, max= 2432, per=4.19%, avg=2259.20, stdev=75.15, samples=20 00:33:52.811 iops : min= 544, max= 608, avg=564.80, stdev=18.79, samples=20 00:33:52.811 lat (msec) : 20=0.88%, 50=99.12% 00:33:52.811 cpu : usr=98.54%, sys=1.09%, ctx=13, majf=0, minf=9 00:33:52.811 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.811 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 issued rwts: 
total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.812 filename1: (groupid=0, jobs=1): err= 0: pid=2706090: Wed Nov 27 08:16:45 2024 00:33:52.812 read: IOPS=562, BW=2249KiB/s (2303kB/s)(22.0MiB/10018msec) 00:33:52.812 slat (nsec): min=9727, max=91092, avg=30869.76, stdev=19339.52 00:33:52.812 clat (usec): min=13258, max=40860, avg=28242.27, stdev=890.57 00:33:52.812 lat (usec): min=13275, max=40885, avg=28273.14, stdev=887.62 00:33:52.812 clat percentiles (usec): 00:33:52.812 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:33:52.812 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:33:52.812 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:52.812 | 99.00th=[28967], 99.50th=[29492], 99.90th=[40633], 99.95th=[40633], 00:33:52.812 | 99.99th=[40633] 00:33:52.812 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2243.90, stdev=63.96, samples=20 00:33:52.812 iops : min= 544, max= 576, avg=560.95, stdev=15.99, samples=20 00:33:52.812 lat (msec) : 20=0.16%, 50=99.84% 00:33:52.812 cpu : usr=98.38%, sys=1.23%, ctx=17, majf=0, minf=9 00:33:52.812 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.812 filename1: (groupid=0, jobs=1): err= 0: pid=2706091: Wed Nov 27 08:16:45 2024 00:33:52.812 read: IOPS=563, BW=2255KiB/s (2309kB/s)(22.1MiB/10018msec) 00:33:52.812 slat (nsec): min=4192, max=39763, avg=10546.99, stdev=3983.71 00:33:52.812 clat (usec): min=9623, max=34754, avg=28280.30, stdev=1394.58 00:33:52.812 lat (usec): min=9630, max=34765, avg=28290.85, stdev=1394.43 00:33:52.812 clat percentiles (usec): 00:33:52.812 | 1.00th=[27132], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.812 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:52.812 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:52.812 | 99.00th=[29230], 99.50th=[29492], 99.90th=[34866], 99.95th=[34866], 00:33:52.812 | 99.99th=[34866] 00:33:52.812 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2252.80, stdev=64.34, samples=20 00:33:52.812 iops : min= 544, max= 576, avg=563.20, stdev=16.08, samples=20 00:33:52.812 lat (msec) : 10=0.28%, 20=0.57%, 50=99.15% 00:33:52.812 cpu : usr=98.33%, sys=1.26%, ctx=17, majf=0, minf=9 00:33:52.812 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.812 filename1: (groupid=0, jobs=1): err= 0: pid=2706092: Wed Nov 27 08:16:45 2024 00:33:52.812 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10003msec) 00:33:52.812 slat (nsec): min=6929, max=64871, avg=20782.69, stdev=7070.74 00:33:52.812 clat (usec): min=26217, max=37769, avg=28302.39, stdev=539.86 00:33:52.812 lat (usec): min=26227, max=37803, avg=28323.17, stdev=540.20 00:33:52.812 clat percentiles (usec): 00:33:52.812 | 1.00th=[27919], 5.00th=[28181], 
10.00th=[28181], 20.00th=[28181], 00:33:52.812 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:52.812 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.812 | 99.00th=[28967], 99.50th=[29230], 99.90th=[37487], 99.95th=[37487], 00:33:52.812 | 99.99th=[38011] 00:33:52.812 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2243.37, stdev=65.66, samples=19 00:33:52.812 iops : min= 544, max= 576, avg=560.84, stdev=16.42, samples=19 00:33:52.812 lat (msec) : 50=100.00% 00:33:52.812 cpu : usr=98.45%, sys=1.17%, ctx=16, majf=0, minf=9 00:33:52.812 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.812 filename1: (groupid=0, jobs=1): err= 0: pid=2706093: Wed Nov 27 08:16:45 2024 00:33:52.812 read: IOPS=563, BW=2254KiB/s (2308kB/s)(22.1MiB/10023msec) 00:33:52.812 slat (nsec): min=3283, max=95081, avg=15590.70, stdev=7388.53 00:33:52.812 clat (usec): min=11435, max=34441, avg=28260.53, stdev=1057.44 00:33:52.812 lat (usec): min=11463, max=34456, avg=28276.12, stdev=1053.98 00:33:52.812 clat percentiles (usec): 00:33:52.812 | 1.00th=[26608], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.812 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:33:52.812 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:52.812 | 99.00th=[28967], 99.50th=[29230], 99.90th=[31065], 99.95th=[31065], 00:33:52.812 | 99.99th=[34341] 00:33:52.812 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2248.55, stdev=63.58, samples=20 00:33:52.812 iops : min= 544, max= 576, avg=562.10, stdev=15.91, samples=20 00:33:52.812 lat (msec) : 20=0.73%, 50=99.27% 00:33:52.812 cpu : usr=98.59%, sys=1.03%, ctx=13, majf=0, minf=9 00:33:52.812 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.812 filename1: (groupid=0, jobs=1): err= 0: pid=2706094: Wed Nov 27 08:16:45 2024 00:33:52.812 read: IOPS=563, BW=2252KiB/s (2306kB/s)(22.0MiB/10002msec) 00:33:52.812 slat (nsec): min=4235, max=36691, avg=12652.19, stdev=4737.17 00:33:52.812 clat (usec): min=12316, max=43357, avg=28305.28, stdev=1482.62 00:33:52.812 lat (usec): min=12326, max=43366, avg=28317.93, stdev=1482.33 00:33:52.812 clat percentiles (usec): 00:33:52.812 | 1.00th=[21365], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.812 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:52.812 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:52.812 | 99.00th=[29230], 99.50th=[34341], 99.90th=[43254], 99.95th=[43254], 00:33:52.812 | 99.99th=[43254] 00:33:52.812 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2250.11, stdev=64.93, samples=19 00:33:52.812 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:33:52.812 lat (msec) : 20=0.85%, 50=99.15% 00:33:52.812 cpu : usr=98.56%, sys=1.00%, ctx=14, majf=0, minf=9 00:33:52.812 IO 
depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:52.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.812 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.812 filename1: (groupid=0, jobs=1): err= 0: pid=2706095: Wed Nov 27 08:16:45 2024 00:33:52.812 read: IOPS=565, BW=2260KiB/s (2315kB/s)(22.1MiB/10023msec) 00:33:52.812 slat (nsec): min=7098, max=91446, avg=19849.66, stdev=8387.85 00:33:52.812 clat (usec): min=10627, max=29602, avg=28148.49, stdev=1518.74 00:33:52.812 lat (usec): min=10640, max=29619, avg=28168.34, stdev=1518.39 00:33:52.812 clat percentiles (usec): 00:33:52.812 | 1.00th=[20317], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.812 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:33:52.812 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.812 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:33:52.812 | 99.99th=[29492] 00:33:52.812 bw ( KiB/s): min= 2176, max= 2432, per=4.19%, avg=2259.20, stdev=75.15, samples=20 00:33:52.812 iops : min= 544, max= 608, avg=564.80, stdev=18.79, samples=20 00:33:52.812 lat (msec) : 20=0.85%, 50=99.15% 00:33:52.812 cpu : usr=98.61%, sys=1.00%, ctx=18, majf=0, minf=9 00:33:52.812 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.813 filename1: (groupid=0, jobs=1): err= 0: pid=2706096: Wed Nov 27 08:16:45 2024 00:33:52.813 read: IOPS=561, BW=2245KiB/s (2299kB/s)(21.9MiB/10007msec) 00:33:52.813 slat (nsec): min=6921, max=39637, avg=17228.51, stdev=6514.71 00:33:52.813 clat (usec): min=22465, max=37701, avg=28383.29, stdev=1052.81 00:33:52.813 lat (usec): min=22478, max=37719, avg=28400.52, stdev=1052.87 00:33:52.813 clat percentiles (usec): 00:33:52.813 | 1.00th=[22938], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.813 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:52.813 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:52.813 | 99.00th=[33817], 99.50th=[33817], 99.90th=[37487], 99.95th=[37487], 00:33:52.813 | 99.99th=[37487] 00:33:52.813 bw ( KiB/s): min= 2144, max= 2304, per=4.16%, avg=2245.60, stdev=61.04, samples=20 00:33:52.813 iops : min= 536, max= 576, avg=561.40, stdev=15.26, samples=20 00:33:52.813 lat (msec) : 50=100.00% 00:33:52.813 cpu : usr=98.47%, sys=1.16%, ctx=17, majf=0, minf=9 00:33:52.813 IO depths : 1=0.2%, 2=5.8%, 4=24.2%, 8=57.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:33:52.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.813 filename2: (groupid=0, jobs=1): err= 0: pid=2706097: Wed Nov 27 08:16:45 2024 00:33:52.813 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10001msec) 00:33:52.813 slat (nsec): min=6942, max=94740, avg=40636.48, 
stdev=22932.22 00:33:52.813 clat (usec): min=14438, max=51484, avg=28081.63, stdev=1470.18 00:33:52.813 lat (usec): min=14447, max=51508, avg=28122.26, stdev=1470.50 00:33:52.813 clat percentiles (usec): 00:33:52.813 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:52.813 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:52.813 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:52.813 | 99.00th=[28967], 99.50th=[29230], 99.90th=[51643], 99.95th=[51643], 00:33:52.813 | 99.99th=[51643] 00:33:52.813 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2236.84, stdev=77.78, samples=19 00:33:52.813 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:33:52.813 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:33:52.813 cpu : usr=98.65%, sys=0.95%, ctx=13, majf=0, minf=9 00:33:52.813 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.813 filename2: (groupid=0, jobs=1): err= 0: pid=2706099: Wed Nov 27 08:16:45 2024 00:33:52.813 read: IOPS=567, BW=2270KiB/s (2325kB/s)(22.2MiB/10018msec) 00:33:52.813 slat (nsec): min=6908, max=51768, avg=18793.49, stdev=6299.99 00:33:52.813 clat (usec): min=10428, max=34108, avg=28032.28, stdev=1830.97 00:33:52.813 lat (usec): min=10440, max=34123, avg=28051.07, stdev=1831.57 00:33:52.813 clat percentiles (usec): 00:33:52.813 | 1.00th=[17695], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:33:52.813 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:33:52.813 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.813 | 99.00th=[28967], 99.50th=[29230], 99.90th=[33817], 99.95th=[33817], 00:33:52.813 | 99.99th=[34341] 00:33:52.813 bw ( KiB/s): min= 2176, max= 2608, per=4.20%, avg=2268.00, stdev=101.97, samples=20 00:33:52.813 iops : min= 544, max= 652, avg=567.00, stdev=25.49, samples=20 00:33:52.813 lat (msec) : 20=1.76%, 50=98.24% 00:33:52.813 cpu : usr=98.59%, sys=1.04%, ctx=12, majf=0, minf=9 00:33:52.813 IO depths : 1=6.0%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:52.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.813 filename2: (groupid=0, jobs=1): err= 0: pid=2706100: Wed Nov 27 08:16:45 2024 00:33:52.813 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.0MiB/10010msec) 00:33:52.813 slat (nsec): min=6077, max=39405, avg=20535.57, stdev=5989.44 00:33:52.813 clat (usec): min=15074, max=38321, avg=28248.27, stdev=849.88 00:33:52.813 lat (usec): min=15082, max=38335, avg=28268.81, stdev=850.34 00:33:52.813 clat percentiles (usec): 00:33:52.813 | 1.00th=[27395], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.813 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:52.813 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.813 | 99.00th=[28967], 99.50th=[29230], 99.90th=[35914], 99.95th=[35914], 00:33:52.813 | 99.99th=[38536] 00:33:52.813 bw ( KiB/s): 
min= 2176, max= 2304, per=4.16%, avg=2243.37, stdev=65.66, samples=19 00:33:52.813 iops : min= 544, max= 576, avg=560.84, stdev=16.42, samples=19 00:33:52.813 lat (msec) : 20=0.28%, 50=99.72% 00:33:52.813 cpu : usr=98.63%, sys=1.00%, ctx=11, majf=0, minf=9 00:33:52.813 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.813 filename2: (groupid=0, jobs=1): err= 0: pid=2706101: Wed Nov 27 08:16:45 2024 00:33:52.813 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10002msec) 00:33:52.813 slat (nsec): min=5636, max=94862, avg=41633.90, stdev=21867.01 00:33:52.813 clat (usec): min=14625, max=51597, avg=28122.34, stdev=1472.86 00:33:52.813 lat (usec): min=14689, max=51613, avg=28163.97, stdev=1471.64 00:33:52.813 clat percentiles (usec): 00:33:52.813 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:52.813 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:33:52.813 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.813 | 99.00th=[28967], 99.50th=[29230], 99.90th=[51643], 99.95th=[51643], 00:33:52.813 | 99.99th=[51643] 00:33:52.813 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2236.84, stdev=77.78, samples=19 00:33:52.813 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:33:52.813 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:33:52.813 cpu : usr=98.50%, sys=1.13%, ctx=16, majf=0, minf=9 00:33:52.813 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.813 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.813 filename2: (groupid=0, jobs=1): err= 0: pid=2706102: Wed Nov 27 08:16:45 2024 00:33:52.813 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10011msec) 00:33:52.813 slat (nsec): min=6153, max=37124, avg=15695.41, stdev=4760.41 00:33:52.813 clat (usec): min=12629, max=39124, avg=28290.44, stdev=1093.03 00:33:52.813 lat (usec): min=12636, max=39143, avg=28306.14, stdev=1093.32 00:33:52.813 clat percentiles (usec): 00:33:52.813 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.813 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:33:52.813 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.813 | 99.00th=[29230], 99.50th=[29230], 99.90th=[39060], 99.95th=[39060], 00:33:52.813 | 99.99th=[39060] 00:33:52.813 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2243.37, stdev=65.66, samples=19 00:33:52.813 iops : min= 544, max= 576, avg=560.84, stdev=16.42, samples=19 00:33:52.813 lat (msec) : 20=0.28%, 50=99.72% 00:33:52.813 cpu : usr=98.41%, sys=1.23%, ctx=13, majf=0, minf=9 00:33:52.813 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.814 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.814 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:33:52.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.814 filename2: (groupid=0, jobs=1): err= 0: pid=2706103: Wed Nov 27 08:16:45 2024 00:33:52.814 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10003msec) 00:33:52.814 slat (nsec): min=7000, max=43046, avg=20106.80, stdev=5810.69 00:33:52.814 clat (usec): min=22840, max=37595, avg=28320.14, stdev=613.91 00:33:52.814 lat (usec): min=22858, max=37616, avg=28340.25, stdev=613.68 00:33:52.814 clat percentiles (usec): 00:33:52.814 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:52.814 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:33:52.814 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.814 | 99.00th=[28967], 99.50th=[29492], 99.90th=[37487], 99.95th=[37487], 00:33:52.814 | 99.99th=[37487] 00:33:52.814 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2243.37, stdev=65.66, samples=19 00:33:52.814 iops : min= 544, max= 576, avg=560.84, stdev=16.42, samples=19 00:33:52.814 lat (msec) : 50=100.00% 00:33:52.814 cpu : usr=98.62%, sys=1.00%, ctx=12, majf=0, minf=9 00:33:52.814 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.814 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.814 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.814 filename2: (groupid=0, jobs=1): err= 0: pid=2706104: Wed Nov 27 08:16:45 2024 00:33:52.814 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10001msec) 00:33:52.814 slat (nsec): min=7172, max=90879, avg=40892.56, stdev=22757.18 00:33:52.814 clat (usec): min=14435, max=51162, avg=28085.67, stdev=1459.09 00:33:52.814 lat (usec): min=14456, max=51200, avg=28126.56, stdev=1460.23 00:33:52.814 clat percentiles (usec): 00:33:52.814 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:52.814 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:52.814 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:52.814 | 99.00th=[28967], 99.50th=[29230], 99.90th=[51119], 99.95th=[51119], 00:33:52.814 | 99.99th=[51119] 00:33:52.814 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2236.84, stdev=77.78, samples=19 00:33:52.814 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:33:52.814 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:33:52.814 cpu : usr=98.68%, sys=0.94%, ctx=14, majf=0, minf=9 00:33:52.814 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.814 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.814 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.814 filename2: (groupid=0, jobs=1): err= 0: pid=2706105: Wed Nov 27 08:16:45 2024 00:33:52.814 read: IOPS=561, BW=2247KiB/s (2301kB/s)(21.9MiB/10001msec) 00:33:52.814 slat (nsec): min=5588, max=88842, avg=30825.95, stdev=13583.11 00:33:52.814 clat (usec): min=749, max=50599, avg=28197.20, stdev=1542.49 00:33:52.814 lat (usec): min=757, max=50636, avg=28228.03, stdev=1542.76 00:33:52.814 clat percentiles (usec): 00:33:52.814 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:33:52.814 | 
30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:52.814 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:33:52.814 | 99.00th=[28967], 99.50th=[29230], 99.90th=[50594], 99.95th=[50594], 00:33:52.814 | 99.99th=[50594] 00:33:52.814 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2236.63, stdev=78.31, samples=19 00:33:52.814 iops : min= 512, max= 576, avg=559.16, stdev=19.58, samples=19 00:33:52.814 lat (usec) : 750=0.02%, 1000=0.04% 00:33:52.814 lat (msec) : 20=0.28%, 50=99.38%, 100=0.28% 00:33:52.814 cpu : usr=98.62%, sys=1.00%, ctx=33, majf=0, minf=9 00:33:52.814 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.814 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.814 issued rwts: total=5619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.814 00:33:52.814 Run status group 0 (all jobs): 00:33:52.814 READ: bw=52.7MiB/s (55.3MB/s), 2245KiB/s-2270KiB/s (2299kB/s-2325kB/s), io=528MiB (554MB), run=10001-10023msec 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.814 08:16:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:52.814 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.815 bdev_null0 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.815 [2024-11-27 08:16:45.540759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.815 bdev_null1 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.815 { 00:33:52.815 "params": { 00:33:52.815 "name": 
"Nvme$subsystem", 00:33:52.815 "trtype": "$TEST_TRANSPORT", 00:33:52.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.815 "adrfam": "ipv4", 00:33:52.815 "trsvcid": "$NVMF_PORT", 00:33:52.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.815 "hdgst": ${hdgst:-false}, 00:33:52.815 "ddgst": ${ddgst:-false} 00:33:52.815 }, 00:33:52.815 "method": "bdev_nvme_attach_controller" 00:33:52.815 } 00:33:52.815 EOF 00:33:52.815 )") 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.815 { 00:33:52.815 "params": { 00:33:52.815 "name": "Nvme$subsystem", 00:33:52.815 "trtype": "$TEST_TRANSPORT", 00:33:52.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.815 "adrfam": "ipv4", 00:33:52.815 "trsvcid": "$NVMF_PORT", 00:33:52.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.815 "hdgst": ${hdgst:-false}, 00:33:52.815 "ddgst": ${ddgst:-false} 00:33:52.815 }, 00:33:52.815 "method": "bdev_nvme_attach_controller" 00:33:52.815 } 00:33:52.815 EOF 00:33:52.815 )") 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 
00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.815 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.816 "params": { 00:33:52.816 "name": "Nvme0", 00:33:52.816 "trtype": "tcp", 00:33:52.816 "traddr": "10.0.0.2", 00:33:52.816 "adrfam": "ipv4", 00:33:52.816 "trsvcid": "4420", 00:33:52.816 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.816 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.816 "hdgst": false, 00:33:52.816 "ddgst": false 00:33:52.816 }, 00:33:52.816 "method": "bdev_nvme_attach_controller" 00:33:52.816 },{ 00:33:52.816 "params": { 00:33:52.816 "name": "Nvme1", 00:33:52.816 "trtype": "tcp", 00:33:52.816 "traddr": "10.0.0.2", 00:33:52.816 "adrfam": "ipv4", 00:33:52.816 "trsvcid": "4420", 00:33:52.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:52.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:52.816 "hdgst": false, 00:33:52.816 "ddgst": false 00:33:52.816 }, 00:33:52.816 "method": "bdev_nvme_attach_controller" 00:33:52.816 }' 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:52.816 08:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.816 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:52.816 ... 00:33:52.816 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:52.816 ... 
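For reference, the setup and fio launch traced above can be reproduced by hand with the sequence below. This is a minimal sketch, not the harness itself: it assumes the SPDK tree at $SPDK_DIR, an nvmf target already running with its RPC socket at the default path, and the same 10.0.0.2:4420 listener; rpc_cmd in the harness forwards to scripts/rpc.py, and the "subsystems"/"bdev" wrapper around the printed attach parameters is the standard SPDK JSON config layout, assumed here because the full output of gen_nvmf_target_json is not shown in this excerpt.

# Create the DIF-enabled null bdev and expose it over NVMe/TCP (arguments taken from the trace above).
$SPDK_DIR/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# (repeat the same four calls with bdev_null1 / cnode1 for the second subsystem)

# Wrap the attach parameters printed above in an SPDK JSON config and hand it to fio's bdev engine.
cat > /tmp/nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# job.fio is a hypothetical job file; the harness generates its own on /dev/fd/61 via gen_fio_conf.
LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme.json job.fio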
00:33:52.816 fio-3.35 00:33:52.816 Starting 4 threads 00:33:58.195 00:33:58.195 filename0: (groupid=0, jobs=1): err= 0: pid=2708034: Wed Nov 27 08:16:51 2024 00:33:58.195 read: IOPS=2430, BW=19.0MiB/s (19.9MB/s)(95.0MiB/5001msec) 00:33:58.195 slat (nsec): min=6237, max=49380, avg=9329.46, stdev=3487.02 00:33:58.195 clat (usec): min=612, max=5778, avg=3263.55, stdev=580.30 00:33:58.195 lat (usec): min=618, max=5795, avg=3272.88, stdev=579.86 00:33:58.195 clat percentiles (usec): 00:33:58.195 | 1.00th=[ 2212], 5.00th=[ 2606], 10.00th=[ 2737], 20.00th=[ 2900], 00:33:58.195 | 30.00th=[ 2999], 40.00th=[ 3064], 50.00th=[ 3130], 60.00th=[ 3195], 00:33:58.195 | 70.00th=[ 3326], 80.00th=[ 3523], 90.00th=[ 4080], 95.00th=[ 4686], 00:33:58.195 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 5473], 99.95th=[ 5604], 00:33:58.195 | 99.99th=[ 5735] 00:33:58.195 bw ( KiB/s): min=18608, max=20128, per=23.66%, avg=19458.56, stdev=452.82, samples=9 00:33:58.195 iops : min= 2326, max= 2516, avg=2432.22, stdev=56.59, samples=9 00:33:58.195 lat (usec) : 750=0.05%, 1000=0.03% 00:33:58.195 lat (msec) : 2=0.44%, 4=88.79%, 10=10.69% 00:33:58.195 cpu : usr=95.74%, sys=3.92%, ctx=10, majf=0, minf=9 00:33:58.195 IO depths : 1=0.1%, 2=3.4%, 4=68.9%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.195 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.195 issued rwts: total=12157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.195 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.195 filename0: (groupid=0, jobs=1): err= 0: pid=2708035: Wed Nov 27 08:16:51 2024 00:33:58.195 read: IOPS=2530, BW=19.8MiB/s (20.7MB/s)(98.9MiB/5002msec) 00:33:58.195 slat (nsec): min=6186, max=45064, avg=9348.30, stdev=3424.61 00:33:58.195 clat (usec): min=655, max=6655, avg=3133.93, stdev=562.70 00:33:58.195 lat (usec): min=665, max=6662, avg=3143.28, stdev=562.40 00:33:58.195 clat percentiles (usec): 00:33:58.195 | 1.00th=[ 2057], 5.00th=[ 2343], 10.00th=[ 2573], 20.00th=[ 2769], 00:33:58.195 | 30.00th=[ 2868], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:33:58.195 | 70.00th=[ 3228], 80.00th=[ 3392], 90.00th=[ 3785], 95.00th=[ 4359], 00:33:58.195 | 99.00th=[ 5014], 99.50th=[ 5211], 99.90th=[ 5538], 99.95th=[ 5669], 00:33:58.195 | 99.99th=[ 6652] 00:33:58.195 bw ( KiB/s): min=19456, max=21824, per=24.69%, avg=20298.67, stdev=723.24, samples=9 00:33:58.195 iops : min= 2432, max= 2728, avg=2537.33, stdev=90.40, samples=9 00:33:58.195 lat (usec) : 750=0.01% 00:33:58.195 lat (msec) : 2=0.83%, 4=91.61%, 10=7.55% 00:33:58.195 cpu : usr=95.40%, sys=4.26%, ctx=13, majf=0, minf=9 00:33:58.195 IO depths : 1=0.3%, 2=4.1%, 4=67.6%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.195 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.195 issued rwts: total=12657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.195 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.195 filename1: (groupid=0, jobs=1): err= 0: pid=2708036: Wed Nov 27 08:16:51 2024 00:33:58.195 read: IOPS=2769, BW=21.6MiB/s (22.7MB/s)(108MiB/5001msec) 00:33:58.195 slat (nsec): min=4220, max=36822, avg=9188.74, stdev=3192.65 00:33:58.195 clat (usec): min=724, max=5876, avg=2860.89, stdev=519.29 00:33:58.195 lat (usec): min=734, max=5887, avg=2870.08, stdev=519.43 00:33:58.195 clat percentiles (usec): 00:33:58.195 | 1.00th=[ 1729], 
5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2474], 00:33:58.195 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2999], 00:33:58.195 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3326], 95.00th=[ 3752], 00:33:58.195 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5407], 99.95th=[ 5604], 00:33:58.195 | 99.99th=[ 5866] 00:33:58.195 bw ( KiB/s): min=20224, max=23616, per=26.98%, avg=22181.78, stdev=1049.02, samples=9 00:33:58.195 iops : min= 2528, max= 2952, avg=2772.67, stdev=131.14, samples=9 00:33:58.195 lat (usec) : 750=0.01%, 1000=0.49% 00:33:58.195 lat (msec) : 2=1.54%, 4=94.74%, 10=3.22% 00:33:58.195 cpu : usr=95.36%, sys=4.28%, ctx=10, majf=0, minf=9 00:33:58.195 IO depths : 1=0.2%, 2=8.1%, 4=62.5%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.195 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.195 issued rwts: total=13851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.195 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.195 filename1: (groupid=0, jobs=1): err= 0: pid=2708038: Wed Nov 27 08:16:51 2024 00:33:58.195 read: IOPS=2549, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5001msec) 00:33:58.195 slat (nsec): min=6242, max=51918, avg=9574.94, stdev=3604.05 00:33:58.195 clat (usec): min=631, max=5739, avg=3109.76, stdev=547.54 00:33:58.195 lat (usec): min=644, max=5752, avg=3119.34, stdev=547.26 00:33:58.195 clat percentiles (usec): 00:33:58.195 | 1.00th=[ 2024], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2737], 00:33:58.195 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3064], 60.00th=[ 3130], 00:33:58.195 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3785], 95.00th=[ 4228], 00:33:58.195 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5407], 00:33:58.195 | 99.99th=[ 5669] 00:33:58.195 bw ( KiB/s): min=19832, max=20848, per=24.74%, avg=20344.00, stdev=332.84, samples=9 00:33:58.195 iops : min= 2479, max= 2606, avg=2543.00, stdev=41.61, samples=9 00:33:58.195 lat (usec) : 750=0.01%, 1000=0.02% 00:33:58.195 lat (msec) : 2=0.87%, 4=91.82%, 10=7.28% 00:33:58.195 cpu : usr=96.04%, sys=3.64%, ctx=10, majf=0, minf=9 00:33:58.195 IO depths : 1=0.2%, 2=5.0%, 4=66.7%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.195 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.195 issued rwts: total=12748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.195 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.195 00:33:58.195 Run status group 0 (all jobs): 00:33:58.195 READ: bw=80.3MiB/s (84.2MB/s), 19.0MiB/s-21.6MiB/s (19.9MB/s-22.7MB/s), io=402MiB (421MB), run=5001-5002msec 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.195 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.196 00:33:58.196 real 0m24.340s 00:33:58.196 user 4m51.933s 00:33:58.196 sys 0m5.249s 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.196 08:16:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.196 ************************************ 00:33:58.196 END TEST fio_dif_rand_params 00:33:58.196 ************************************ 00:33:58.196 08:16:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:58.196 08:16:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:58.196 08:16:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.196 08:16:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:58.196 ************************************ 00:33:58.196 START TEST fio_dif_digest 00:33:58.196 ************************************ 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.196 bdev_null0 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.196 [2024-11-27 08:16:52.160065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:58.196 { 00:33:58.196 "params": { 00:33:58.196 "name": 
"Nvme$subsystem", 00:33:58.196 "trtype": "$TEST_TRANSPORT", 00:33:58.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.196 "adrfam": "ipv4", 00:33:58.196 "trsvcid": "$NVMF_PORT", 00:33:58.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.196 "hdgst": ${hdgst:-false}, 00:33:58.196 "ddgst": ${ddgst:-false} 00:33:58.196 }, 00:33:58.196 "method": "bdev_nvme_attach_controller" 00:33:58.196 } 00:33:58.196 EOF 00:33:58.196 )") 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:58.196 "params": { 00:33:58.196 "name": "Nvme0", 00:33:58.196 "trtype": "tcp", 00:33:58.196 "traddr": "10.0.0.2", 00:33:58.196 "adrfam": "ipv4", 00:33:58.196 "trsvcid": "4420", 00:33:58.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:58.196 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:58.196 "hdgst": true, 00:33:58.196 "ddgst": true 00:33:58.196 }, 00:33:58.196 "method": "bdev_nvme_attach_controller" 00:33:58.196 }' 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:58.196 08:16:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.453 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:58.453 ... 
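The digest pass that follows differs from the rand_params run above only in the bdev DIF type (--dif-type 3 on bdev_null_create), the digest flags ("hdgst": true, "ddgst": true in the attach parameters just printed), and the fio geometry (128k blocks, 3 jobs, iodepth 3, 10 seconds). A hypothetical job file matching what gen_fio_conf feeds fio here could look like the sketch below; the bdev name Nvme0n1 and the time_based/thread options are assumptions, while the remaining values come straight from the trace.

# Sketch of the digest-test job; the SPDK fio bdev plugin runs jobs as threads, hence thread=1.
[filename0]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
numjobs=3
iodepth=3
time_based=1
runtime=10
filename=Nvme0n1   # assumed name of the namespace bdev behind controller Nvme0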
00:33:58.453 fio-3.35 00:33:58.453 Starting 3 threads 00:34:10.665 00:34:10.665 filename0: (groupid=0, jobs=1): err= 0: pid=2709222: Wed Nov 27 08:17:03 2024 00:34:10.665 read: IOPS=306, BW=38.3MiB/s (40.2MB/s)(385MiB/10047msec) 00:34:10.665 slat (nsec): min=6582, max=42422, avg=17903.67, stdev=6963.65 00:34:10.665 clat (usec): min=7261, max=50305, avg=9759.15, stdev=1226.81 00:34:10.665 lat (usec): min=7273, max=50327, avg=9777.06, stdev=1227.10 00:34:10.665 clat percentiles (usec): 00:34:10.665 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:34:10.665 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:34:10.665 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:34:10.665 | 99.00th=[11338], 99.50th=[11600], 99.90th=[13042], 99.95th=[47973], 00:34:10.665 | 99.99th=[50070] 00:34:10.665 bw ( KiB/s): min=37376, max=40960, per=37.81%, avg=39372.80, stdev=741.03, samples=20 00:34:10.665 iops : min= 292, max= 320, avg=307.60, stdev= 5.79, samples=20 00:34:10.665 lat (msec) : 10=64.72%, 20=35.22%, 50=0.03%, 100=0.03% 00:34:10.665 cpu : usr=95.95%, sys=3.73%, ctx=26, majf=0, minf=64 00:34:10.665 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.665 issued rwts: total=3078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.665 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.665 filename0: (groupid=0, jobs=1): err= 0: pid=2709223: Wed Nov 27 08:17:03 2024 00:34:10.665 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(325MiB/10003msec) 00:34:10.665 slat (nsec): min=6524, max=45108, avg=17358.66, stdev=7857.01 00:34:10.665 clat (usec): min=4759, max=14492, avg=11539.77, stdev=790.92 00:34:10.665 lat (usec): min=4774, max=14505, avg=11557.13, stdev=791.27 00:34:10.665 clat percentiles (usec): 00:34:10.665 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:34:10.665 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:34:10.665 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12518], 95.00th=[12780], 00:34:10.665 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14091], 99.95th=[14222], 00:34:10.665 | 99.99th=[14484] 00:34:10.665 bw ( KiB/s): min=32256, max=34816, per=31.89%, avg=33212.63, stdev=558.54, samples=19 00:34:10.665 iops : min= 252, max= 272, avg=259.47, stdev= 4.36, samples=19 00:34:10.665 lat (msec) : 10=1.85%, 20=98.15% 00:34:10.665 cpu : usr=96.34%, sys=3.35%, ctx=14, majf=0, minf=24 00:34:10.665 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.665 issued rwts: total=2596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.665 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.665 filename0: (groupid=0, jobs=1): err= 0: pid=2709224: Wed Nov 27 08:17:03 2024 00:34:10.665 read: IOPS=248, BW=31.1MiB/s (32.6MB/s)(313MiB/10045msec) 00:34:10.665 slat (nsec): min=6630, max=65898, avg=18722.00, stdev=8980.15 00:34:10.665 clat (usec): min=9386, max=52325, avg=12015.10, stdev=1368.17 00:34:10.665 lat (usec): min=9398, max=52339, avg=12033.82, stdev=1367.92 00:34:10.665 clat percentiles (usec): 00:34:10.665 | 1.00th=[10028], 5.00th=[10683], 10.00th=[10945], 20.00th=[11338], 00:34:10.665 | 
30.00th=[11600], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:34:10.665 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:34:10.665 | 99.00th=[14353], 99.50th=[14746], 99.90th=[15270], 99.95th=[47449], 00:34:10.665 | 99.99th=[52167] 00:34:10.665 bw ( KiB/s): min=30464, max=33536, per=30.70%, avg=31974.40, stdev=669.11, samples=20 00:34:10.665 iops : min= 238, max= 262, avg=249.80, stdev= 5.23, samples=20 00:34:10.665 lat (msec) : 10=0.92%, 20=99.00%, 50=0.04%, 100=0.04% 00:34:10.665 cpu : usr=87.53%, sys=7.36%, ctx=812, majf=0, minf=75 00:34:10.665 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.665 issued rwts: total=2500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.665 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.665 00:34:10.665 Run status group 0 (all jobs): 00:34:10.665 READ: bw=102MiB/s (107MB/s), 31.1MiB/s-38.3MiB/s (32.6MB/s-40.2MB/s), io=1022MiB (1071MB), run=10003-10047msec 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.665 08:17:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:10.666 08:17:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.666 00:34:10.666 real 0m11.282s 00:34:10.666 user 0m34.796s 00:34:10.666 sys 0m1.822s 00:34:10.666 08:17:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.666 08:17:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:10.666 ************************************ 00:34:10.666 END TEST fio_dif_digest 00:34:10.666 ************************************ 00:34:10.666 08:17:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:10.666 08:17:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.666 rmmod nvme_tcp 00:34:10.666 rmmod nvme_fabrics 00:34:10.666 rmmod nvme_keyring 00:34:10.666 08:17:03 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2700626 ']' 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2700626 00:34:10.666 08:17:03 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2700626 ']' 00:34:10.666 08:17:03 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2700626 00:34:10.666 08:17:03 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:10.666 08:17:03 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.666 08:17:03 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2700626 00:34:10.666 08:17:03 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:10.666 08:17:03 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:10.666 08:17:03 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2700626' 00:34:10.666 killing process with pid 2700626 00:34:10.666 08:17:03 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2700626 00:34:10.666 08:17:03 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2700626 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:10.666 08:17:03 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:12.575 Waiting for block devices as requested 00:34:12.575 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:12.575 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:12.575 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:12.575 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:12.834 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:12.834 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:12.834 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:12.834 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:13.093 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:13.093 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:13.093 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:13.353 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:13.353 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:13.353 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:13.353 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:13.612 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:13.612 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:13.612 08:17:07 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:13.612 08:17:07 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:13.612 08:17:07 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:13.612 08:17:07 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:13.612 08:17:07 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:13.612 08:17:07 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:13.612 08:17:07 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:13.612 08:17:07 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:13.612 08:17:07 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.612 08:17:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:13.612 08:17:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.145 08:17:09 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:16.145 
00:34:16.145 real 1m13.118s 00:34:16.145 user 7m8.260s 00:34:16.145 sys 0m19.836s 00:34:16.145 08:17:09 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.145 08:17:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:16.145 ************************************ 00:34:16.145 END TEST nvmf_dif 00:34:16.145 ************************************ 00:34:16.145 08:17:09 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:16.145 08:17:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:16.145 08:17:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:16.145 08:17:09 -- common/autotest_common.sh@10 -- # set +x 00:34:16.145 ************************************ 00:34:16.145 START TEST nvmf_abort_qd_sizes 00:34:16.145 ************************************ 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:16.145 * Looking for test storage... 00:34:16.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:16.145 08:17:09 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:16.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.145 --rc genhtml_branch_coverage=1 00:34:16.145 --rc genhtml_function_coverage=1 00:34:16.145 --rc genhtml_legend=1 00:34:16.145 --rc geninfo_all_blocks=1 00:34:16.145 --rc geninfo_unexecuted_blocks=1 00:34:16.145 00:34:16.145 ' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:16.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.145 --rc genhtml_branch_coverage=1 00:34:16.145 --rc genhtml_function_coverage=1 00:34:16.145 --rc genhtml_legend=1 00:34:16.145 --rc geninfo_all_blocks=1 00:34:16.145 --rc geninfo_unexecuted_blocks=1 00:34:16.145 00:34:16.145 ' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:16.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.145 --rc genhtml_branch_coverage=1 00:34:16.145 --rc genhtml_function_coverage=1 00:34:16.145 --rc genhtml_legend=1 00:34:16.145 --rc geninfo_all_blocks=1 00:34:16.145 --rc geninfo_unexecuted_blocks=1 00:34:16.145 00:34:16.145 ' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:16.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.145 --rc genhtml_branch_coverage=1 00:34:16.145 --rc genhtml_function_coverage=1 00:34:16.145 --rc genhtml_legend=1 00:34:16.145 --rc geninfo_all_blocks=1 00:34:16.145 --rc geninfo_unexecuted_blocks=1 00:34:16.145 00:34:16.145 ' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:16.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:16.145 08:17:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:21.422 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:21.422 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.422 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:21.422 Found net devices under 0000:86:00.0: cvl_0_0 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:21.423 Found net devices under 0000:86:00.1: cvl_0_1 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:21.423 08:17:15 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:21.423 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:21.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:21.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:34:21.683 00:34:21.683 --- 10.0.0.2 ping statistics --- 00:34:21.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.683 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:21.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:21.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:34:21.683 00:34:21.683 --- 10.0.0.1 ping statistics --- 00:34:21.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.683 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:21.683 08:17:15 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:24.979 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:24.979 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:25.548 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2717145 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2717145 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2717145 ']' 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:25.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:25.548 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.548 [2024-11-27 08:17:19.577363] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:34:25.548 [2024-11-27 08:17:19.577410] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.548 [2024-11-27 08:17:19.643935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:25.808 [2024-11-27 08:17:19.689278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:25.808 [2024-11-27 08:17:19.689315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:25.808 [2024-11-27 08:17:19.689322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:25.808 [2024-11-27 08:17:19.689328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:25.808 [2024-11-27 08:17:19.689333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:25.808 [2024-11-27 08:17:19.693969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.808 [2024-11-27 08:17:19.693986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:25.808 [2024-11-27 08:17:19.694093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:25.808 [2024-11-27 08:17:19.694095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:25.808 
08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.808 08:17:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.808 ************************************ 00:34:25.808 START TEST spdk_target_abort 00:34:25.808 ************************************ 00:34:25.808 08:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:25.808 08:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:25.808 08:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:25.808 08:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.808 08:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.099 spdk_targetn1 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.099 [2024-11-27 08:17:22.721664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.099 [2024-11-27 08:17:22.761960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.099 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:29.100 08:17:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:32.389 Initializing NVMe Controllers 00:34:32.389 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:32.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:32.389 Initialization complete. Launching workers. 00:34:32.389 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15905, failed: 0 00:34:32.389 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1271, failed to submit 14634 00:34:32.389 success 742, unsuccessful 529, failed 0 00:34:32.389 08:17:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:32.389 08:17:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:35.680 Initializing NVMe Controllers 00:34:35.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:35.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:35.681 Initialization complete. Launching workers. 00:34:35.681 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8528, failed: 0 00:34:35.681 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7290 00:34:35.681 success 326, unsuccessful 912, failed 0 00:34:35.681 08:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:35.681 08:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:38.972 Initializing NVMe Controllers 00:34:38.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:38.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:38.972 Initialization complete. Launching workers. 
00:34:38.972 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38032, failed: 0 00:34:38.972 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2825, failed to submit 35207 00:34:38.972 success 582, unsuccessful 2243, failed 0 00:34:38.972 08:17:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:38.972 08:17:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.972 08:17:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.972 08:17:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.972 08:17:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:38.972 08:17:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.972 08:17:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2717145 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2717145 ']' 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2717145 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2717145 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2717145' 00:34:39.910 killing process with pid 2717145 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2717145 00:34:39.910 08:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2717145 00:34:40.169 00:34:40.169 real 0m14.257s 00:34:40.169 user 0m54.229s 00:34:40.169 sys 0m2.749s 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.169 ************************************ 00:34:40.169 END TEST spdk_target_abort 00:34:40.169 ************************************ 00:34:40.169 08:17:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:40.169 08:17:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:40.169 08:17:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:40.169 08:17:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:40.169 ************************************ 00:34:40.169 START TEST kernel_target_abort 00:34:40.169 
************************************ 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.169 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:40.170 08:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:42.706 Waiting for block devices as requested 00:34:42.707 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:42.966 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:42.966 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:42.966 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:43.226 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:43.226 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:43.227 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:43.227 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:43.486 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:43.486 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:43.486 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:43.486 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:43.745 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:43.745 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:43.745 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:43.745 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:44.005 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:44.005 No valid GPT data, bailing 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:44.005 08:17:38 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:44.005 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:44.265 00:34:44.265 Discovery Log Number of Records 2, Generation counter 2 00:34:44.265 =====Discovery Log Entry 0====== 00:34:44.265 trtype: tcp 00:34:44.265 adrfam: ipv4 00:34:44.265 subtype: current discovery subsystem 00:34:44.265 treq: not specified, sq flow control disable supported 00:34:44.265 portid: 1 00:34:44.265 trsvcid: 4420 00:34:44.265 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:44.265 traddr: 10.0.0.1 00:34:44.265 eflags: none 00:34:44.265 sectype: none 00:34:44.265 =====Discovery Log Entry 1====== 00:34:44.265 trtype: tcp 00:34:44.265 adrfam: ipv4 00:34:44.265 subtype: nvme subsystem 00:34:44.265 treq: not specified, sq flow control disable supported 00:34:44.265 portid: 1 00:34:44.265 trsvcid: 4420 00:34:44.265 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:44.265 traddr: 10.0.0.1 00:34:44.265 eflags: none 00:34:44.265 sectype: none 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.265 08:17:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:44.265 08:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:47.560 Initializing NVMe Controllers 00:34:47.560 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:47.560 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:47.560 Initialization complete. Launching workers. 00:34:47.560 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89262, failed: 0 00:34:47.560 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 89262, failed to submit 0 00:34:47.560 success 0, unsuccessful 89262, failed 0 00:34:47.560 08:17:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:47.560 08:17:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:50.848 Initializing NVMe Controllers 00:34:50.848 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:50.848 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:50.848 Initialization complete. Launching workers. 
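The three abort runs in this stretch of the trace are driven by a simple queue-depth loop. Below is a minimal sketch of that loop, reconstructed from the qds=(4 24 64) assignment and the abort invocations visible in the trace; the SPDK path and the connection string are copied verbatim from the log, everything else is illustrative, and running it outside this CI job would need an equivalent SPDK build plus a reachable kernel target.

    # Sketch only: repeat the abort example at each queue depth against the kernel target.
    spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as it appears in the trace
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        "$spdk_dir/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

Each run prints the number of I/Os completed and aborts submitted, which is exactly what the NS:/CTRLR: summary lines around this point in the log record.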
00:34:50.848 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 143730, failed: 0 00:34:50.848 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36090, failed to submit 107640 00:34:50.848 success 0, unsuccessful 36090, failed 0 00:34:50.848 08:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:50.848 08:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:53.381 Initializing NVMe Controllers 00:34:53.381 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:53.381 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:53.381 Initialization complete. Launching workers. 00:34:53.381 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 135514, failed: 0 00:34:53.381 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33946, failed to submit 101568 00:34:53.381 success 0, unsuccessful 33946, failed 0 00:34:53.381 08:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:53.381 08:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:53.381 08:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:53.639 08:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:53.640 08:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:53.640 08:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:53.640 08:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:53.640 08:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:53.640 08:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:53.640 08:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:56.169 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:56.169 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:34:56.169 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:56.741 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:56.741 00:34:56.741 real 0m16.545s 00:34:56.741 user 0m8.560s 00:34:56.742 sys 0m4.494s 00:34:56.742 08:17:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:56.742 08:17:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:56.742 ************************************ 00:34:56.742 END TEST kernel_target_abort 00:34:56.742 ************************************ 00:34:56.742 08:17:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:56.742 08:17:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:56.742 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:56.742 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:56.742 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:56.742 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:56.742 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:56.742 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:56.742 rmmod nvme_tcp 00:34:56.742 rmmod nvme_fabrics 00:34:56.742 rmmod nvme_keyring 00:34:56.742 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:57.000 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:57.000 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:57.000 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2717145 ']' 00:34:57.000 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2717145 00:34:57.000 08:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2717145 ']' 00:34:57.000 08:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2717145 00:34:57.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2717145) - No such process 00:34:57.000 08:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2717145 is not found' 00:34:57.000 Process with pid 2717145 is not found 00:34:57.000 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:57.000 08:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:58.903 Waiting for block devices as requested 00:34:58.903 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:59.163 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:59.163 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:59.163 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:59.422 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:59.422 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:59.422 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:59.422 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:59.690 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:59.690 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:59.691 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:59.691 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:00.044 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:00.044 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:00.044 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:00.044 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:00.379 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:00.379 08:17:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.284 08:17:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:02.284 00:35:02.284 real 0m46.503s 00:35:02.284 user 1m6.701s 00:35:02.284 sys 0m15.448s 00:35:02.284 08:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:02.284 08:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:02.284 ************************************ 00:35:02.284 END TEST nvmf_abort_qd_sizes 00:35:02.284 ************************************ 00:35:02.284 08:17:56 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:02.284 08:17:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:02.284 08:17:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:02.284 08:17:56 -- common/autotest_common.sh@10 -- # set +x 00:35:02.284 ************************************ 00:35:02.284 START TEST keyring_file 00:35:02.284 ************************************ 00:35:02.284 08:17:56 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:02.544 * Looking for test storage... 
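For reference, the clean_kernel_target step traced a few lines above reduces to the configfs removals and module unload sketched below. The paths are copied verbatim from the trace; the redirection target of the echo 0 step is not captured by xtrace and is therefore left out, and the commands assume root on a host where the earlier target setup actually ran.

    # Sketch only: unwind the configfs NVMe/TCP kernel target created for the abort test.
    nqn=nqn.2016-06.io.spdk:testnqn
    rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"       # unbind the subsystem from port 1
    rmdir "/sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1"  # remove the namespace
    rmdir /sys/kernel/config/nvmet/ports/1                         # remove the TCP port
    rmdir "/sys/kernel/config/nvmet/subsystems/$nqn"               # remove the subsystem itself
    modprobe -r nvmet_tcp nvmet                                    # unload the kernel target modules

The setup.sh run that follows in the trace then rebinds the devices to vfio-pci, which is what the ioatdma -> vfio-pci and nvme -> vfio-pci lines above record.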
00:35:02.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:02.544 08:17:56 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:02.544 08:17:56 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:02.544 08:17:56 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:02.544 08:17:56 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:02.544 08:17:56 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:02.544 08:17:56 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:02.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.544 --rc genhtml_branch_coverage=1 00:35:02.544 --rc genhtml_function_coverage=1 00:35:02.544 --rc genhtml_legend=1 00:35:02.544 --rc geninfo_all_blocks=1 00:35:02.544 --rc geninfo_unexecuted_blocks=1 00:35:02.544 00:35:02.544 ' 00:35:02.544 08:17:56 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:02.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.544 --rc genhtml_branch_coverage=1 00:35:02.544 --rc genhtml_function_coverage=1 00:35:02.544 --rc genhtml_legend=1 00:35:02.544 --rc geninfo_all_blocks=1 
00:35:02.544 --rc geninfo_unexecuted_blocks=1 00:35:02.544 00:35:02.544 ' 00:35:02.544 08:17:56 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:02.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.544 --rc genhtml_branch_coverage=1 00:35:02.544 --rc genhtml_function_coverage=1 00:35:02.544 --rc genhtml_legend=1 00:35:02.544 --rc geninfo_all_blocks=1 00:35:02.544 --rc geninfo_unexecuted_blocks=1 00:35:02.544 00:35:02.544 ' 00:35:02.544 08:17:56 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:02.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.544 --rc genhtml_branch_coverage=1 00:35:02.544 --rc genhtml_function_coverage=1 00:35:02.544 --rc genhtml_legend=1 00:35:02.544 --rc geninfo_all_blocks=1 00:35:02.544 --rc geninfo_unexecuted_blocks=1 00:35:02.544 00:35:02.544 ' 00:35:02.544 08:17:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:02.544 08:17:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.544 08:17:56 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.544 08:17:56 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.544 08:17:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.544 08:17:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.544 08:17:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:02.544 08:17:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:02.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:02.544 08:17:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:02.544 08:17:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:02.544 08:17:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:02.544 08:17:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:02.544 08:17:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:02.544 08:17:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:02.544 08:17:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:02.544 08:17:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:35:02.544 08:17:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:02.544 08:17:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:02.544 08:17:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:02.544 08:17:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:02.544 08:17:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wmNYFFep8w 00:35:02.544 08:17:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:02.544 08:17:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:02.544 08:17:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wmNYFFep8w 00:35:02.545 08:17:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wmNYFFep8w 00:35:02.545 08:17:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.wmNYFFep8w 00:35:02.545 08:17:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:02.545 08:17:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:02.545 08:17:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:02.545 08:17:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:02.545 08:17:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:02.545 08:17:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:02.545 08:17:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.v0Jq0CqzQG 00:35:02.545 08:17:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:02.545 08:17:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:02.545 08:17:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:02.545 08:17:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:02.545 08:17:56 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:02.545 08:17:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:02.545 08:17:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:02.804 08:17:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.v0Jq0CqzQG 00:35:02.804 08:17:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.v0Jq0CqzQG 00:35:02.804 08:17:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.v0Jq0CqzQG 00:35:02.804 08:17:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=2725689 00:35:02.804 08:17:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2725689 00:35:02.804 08:17:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:02.804 08:17:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2725689 ']' 00:35:02.804 08:17:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.804 08:17:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.804 08:17:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.804 08:17:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.804 08:17:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:02.804 [2024-11-27 08:17:56.742593] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:35:02.804 [2024-11-27 08:17:56.742645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725689 ] 00:35:02.804 [2024-11-27 08:17:56.804564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.804 [2024-11-27 08:17:56.847364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.063 08:17:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.063 08:17:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:03.063 08:17:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:03.063 08:17:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.063 08:17:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:03.063 [2024-11-27 08:17:57.069993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.064 null0 00:35:03.064 [2024-11-27 08:17:57.102048] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:03.064 [2024-11-27 08:17:57.102417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.064 08:17:57 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:03.064 [2024-11-27 08:17:57.130107] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:03.064 request: 00:35:03.064 { 00:35:03.064 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.064 "secure_channel": false, 00:35:03.064 "listen_address": { 00:35:03.064 "trtype": "tcp", 00:35:03.064 "traddr": "127.0.0.1", 00:35:03.064 "trsvcid": "4420" 00:35:03.064 }, 00:35:03.064 "method": "nvmf_subsystem_add_listener", 00:35:03.064 "req_id": 1 00:35:03.064 } 00:35:03.064 Got JSON-RPC error response 00:35:03.064 response: 00:35:03.064 { 00:35:03.064 
"code": -32602, 00:35:03.064 "message": "Invalid parameters" 00:35:03.064 } 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:03.064 08:17:57 keyring_file -- keyring/file.sh@47 -- # bperfpid=2725789 00:35:03.064 08:17:57 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2725789 /var/tmp/bperf.sock 00:35:03.064 08:17:57 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2725789 ']' 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:03.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.064 08:17:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:03.323 [2024-11-27 08:17:57.184067] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:35:03.323 [2024-11-27 08:17:57.184111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725789 ] 00:35:03.323 [2024-11-27 08:17:57.245210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.323 [2024-11-27 08:17:57.286079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.323 08:17:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.323 08:17:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:03.323 08:17:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wmNYFFep8w 00:35:03.323 08:17:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wmNYFFep8w 00:35:03.622 08:17:57 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.v0Jq0CqzQG 00:35:03.622 08:17:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.v0Jq0CqzQG 00:35:03.880 08:17:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:03.880 08:17:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:03.880 08:17:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.880 08:17:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:03.880 08:17:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:03.880 08:17:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.wmNYFFep8w == \/\t\m\p\/\t\m\p\.\w\m\N\Y\F\F\e\p\8\w ]] 00:35:03.880 08:17:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:03.880 08:17:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:03.880 08:17:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.880 08:17:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:03.880 08:17:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.141 08:17:58 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.v0Jq0CqzQG == \/\t\m\p\/\t\m\p\.\v\0\J\q\0\C\q\z\Q\G ]] 00:35:04.141 08:17:58 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:04.141 08:17:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:04.141 08:17:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.141 08:17:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.141 08:17:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:04.141 08:17:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.401 08:17:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:04.401 08:17:58 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:04.401 08:17:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:04.401 08:17:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.401 08:17:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.401 08:17:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:04.401 08:17:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.660 08:17:58 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:04.660 08:17:58 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:04.660 08:17:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:04.660 [2024-11-27 08:17:58.762269] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:04.919 nvme0n1 00:35:04.919 08:17:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:04.919 08:17:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:04.919 08:17:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.919 08:17:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.919 08:17:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:04.919 08:17:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.178 08:17:59 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:05.178 08:17:59 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:05.178 08:17:59 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:35:05.178 08:17:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:05.178 08:17:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.178 08:17:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.178 08:17:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:05.178 08:17:59 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:05.178 08:17:59 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:05.437 Running I/O for 1 seconds... 00:35:06.372 17486.00 IOPS, 68.30 MiB/s 00:35:06.372 Latency(us) 00:35:06.372 [2024-11-27T07:18:00.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.372 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:06.372 nvme0n1 : 1.00 17530.64 68.48 0.00 0.00 7287.10 3348.03 19033.93 00:35:06.372 [2024-11-27T07:18:00.481Z] =================================================================================================================== 00:35:06.372 [2024-11-27T07:18:00.481Z] Total : 17530.64 68.48 0.00 0.00 7287.10 3348.03 19033.93 00:35:06.372 { 00:35:06.372 "results": [ 00:35:06.372 { 00:35:06.372 "job": "nvme0n1", 00:35:06.372 "core_mask": "0x2", 00:35:06.372 "workload": "randrw", 00:35:06.372 "percentage": 50, 00:35:06.372 "status": "finished", 00:35:06.372 "queue_depth": 128, 00:35:06.372 "io_size": 4096, 00:35:06.372 "runtime": 1.004812, 00:35:06.372 "iops": 17530.642548058742, 00:35:06.372 "mibps": 68.47907245335446, 00:35:06.372 "io_failed": 0, 00:35:06.372 "io_timeout": 0, 00:35:06.372 "avg_latency_us": 7287.10392970418, 00:35:06.372 "min_latency_us": 3348.034782608696, 00:35:06.372 "max_latency_us": 19033.93391304348 00:35:06.372 } 00:35:06.372 ], 00:35:06.372 "core_count": 1 00:35:06.372 } 00:35:06.372 08:18:00 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:06.372 08:18:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:06.631 08:18:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:06.631 08:18:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.631 08:18:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.631 08:18:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.631 08:18:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.631 08:18:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.890 08:18:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:06.890 08:18:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:06.890 08:18:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:06.890 08:18:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.890 08:18:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.890 08:18:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:06.890 08:18:00 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.890 08:18:00 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:06.890 08:18:00 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:06.890 08:18:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:06.890 08:18:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:06.890 08:18:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:06.890 08:18:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.890 08:18:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:06.890 08:18:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.890 08:18:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:06.890 08:18:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:07.149 [2024-11-27 08:18:01.134827] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:07.149 [2024-11-27 08:18:01.135177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c96210 (107): Transport endpoint is not connected 00:35:07.149 [2024-11-27 08:18:01.136172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c96210 (9): Bad file descriptor 00:35:07.149 [2024-11-27 08:18:01.137173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:07.149 [2024-11-27 08:18:01.137190] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:07.149 [2024-11-27 08:18:01.137199] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:07.149 [2024-11-27 08:18:01.137208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
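The connection errors just logged are the point of this step: the attach is repeated with --psk key1 instead of the key0 used for the successful run earlier, and the script expects bdev_nvme_attach_controller to fail, asserting that with its NOT wrapper from autotest_common.sh. A simplified stand-in for that expected-failure check (not the real helper, which tracks exit codes in more detail) would be:

    # Sketch only: the attach with the mismatched PSK must fail for the test to pass.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
          -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "attach with the wrong PSK unexpectedly succeeded" >&2
        exit 1
    fi

The JSON-RPC request and the -5 (Input/output error) response that follow in the log are the raw form of that expected failure.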
00:35:07.149 request: 00:35:07.149 { 00:35:07.149 "name": "nvme0", 00:35:07.149 "trtype": "tcp", 00:35:07.149 "traddr": "127.0.0.1", 00:35:07.149 "adrfam": "ipv4", 00:35:07.149 "trsvcid": "4420", 00:35:07.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.149 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:07.149 "prchk_reftag": false, 00:35:07.149 "prchk_guard": false, 00:35:07.149 "hdgst": false, 00:35:07.149 "ddgst": false, 00:35:07.149 "psk": "key1", 00:35:07.149 "allow_unrecognized_csi": false, 00:35:07.149 "method": "bdev_nvme_attach_controller", 00:35:07.149 "req_id": 1 00:35:07.149 } 00:35:07.149 Got JSON-RPC error response 00:35:07.149 response: 00:35:07.149 { 00:35:07.149 "code": -5, 00:35:07.149 "message": "Input/output error" 00:35:07.149 } 00:35:07.149 08:18:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:07.149 08:18:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:07.149 08:18:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:07.149 08:18:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:07.149 08:18:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:07.149 08:18:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:07.149 08:18:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.149 08:18:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.149 08:18:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.149 08:18:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.407 08:18:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:07.407 08:18:01 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:07.407 08:18:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:07.407 08:18:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.407 08:18:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.407 08:18:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.407 08:18:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:07.666 08:18:01 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:07.666 08:18:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:07.666 08:18:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:07.666 08:18:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:07.666 08:18:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:07.924 08:18:01 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:07.924 08:18:01 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:07.924 08:18:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.183 08:18:02 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:35:08.183 08:18:02 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.wmNYFFep8w 00:35:08.183 08:18:02 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.wmNYFFep8w 00:35:08.183 08:18:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:08.183 08:18:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.wmNYFFep8w 00:35:08.183 08:18:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:08.183 08:18:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.183 08:18:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:08.183 08:18:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.183 08:18:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wmNYFFep8w 00:35:08.183 08:18:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wmNYFFep8w 00:35:08.442 [2024-11-27 08:18:02.317283] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wmNYFFep8w': 0100660 00:35:08.442 [2024-11-27 08:18:02.317310] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:08.442 request: 00:35:08.442 { 00:35:08.442 "name": "key0", 00:35:08.442 "path": "/tmp/tmp.wmNYFFep8w", 00:35:08.442 "method": "keyring_file_add_key", 00:35:08.442 "req_id": 1 00:35:08.442 } 00:35:08.442 Got JSON-RPC error response 00:35:08.442 response: 00:35:08.442 { 00:35:08.442 "code": -1, 00:35:08.442 "message": "Operation not permitted" 00:35:08.442 } 00:35:08.442 08:18:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:08.442 08:18:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:08.442 08:18:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:08.442 08:18:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:08.442 08:18:02 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.wmNYFFep8w 00:35:08.442 08:18:02 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wmNYFFep8w 00:35:08.442 08:18:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wmNYFFep8w 00:35:08.442 08:18:02 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.wmNYFFep8w 00:35:08.442 08:18:02 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:08.442 08:18:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:08.442 08:18:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.442 08:18:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.442 08:18:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:08.442 08:18:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.700 08:18:02 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:08.700 08:18:02 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:08.700 08:18:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:08.700 08:18:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:08.700 08:18:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:08.700 08:18:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.700 08:18:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:08.700 08:18:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.700 08:18:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:08.700 08:18:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:08.959 [2024-11-27 08:18:02.910887] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.wmNYFFep8w': No such file or directory 00:35:08.959 [2024-11-27 08:18:02.910906] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:08.959 [2024-11-27 08:18:02.910922] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:08.959 [2024-11-27 08:18:02.910928] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:08.959 [2024-11-27 08:18:02.910936] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:08.959 [2024-11-27 08:18:02.910942] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:08.959 request: 00:35:08.959 { 00:35:08.959 "name": "nvme0", 00:35:08.959 "trtype": "tcp", 00:35:08.959 "traddr": "127.0.0.1", 00:35:08.959 "adrfam": "ipv4", 00:35:08.959 "trsvcid": "4420", 00:35:08.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:08.959 "prchk_reftag": false, 00:35:08.959 "prchk_guard": false, 00:35:08.959 "hdgst": false, 00:35:08.959 "ddgst": false, 00:35:08.959 "psk": "key0", 00:35:08.959 "allow_unrecognized_csi": false, 00:35:08.959 "method": "bdev_nvme_attach_controller", 00:35:08.959 "req_id": 1 00:35:08.959 } 00:35:08.959 Got JSON-RPC error response 00:35:08.959 response: 00:35:08.959 { 00:35:08.959 "code": -19, 00:35:08.959 "message": "No such device" 00:35:08.959 } 00:35:08.959 08:18:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:08.959 08:18:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:08.959 08:18:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:08.959 08:18:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:08.959 08:18:02 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:08.959 08:18:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:09.218 08:18:03 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:09.218 08:18:03 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:35:09.218 08:18:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:09.218 08:18:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:09.218 08:18:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:09.218 08:18:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:09.218 08:18:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2mPP2ILJ3g 00:35:09.218 08:18:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:09.218 08:18:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:09.218 08:18:03 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:09.218 08:18:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:09.218 08:18:03 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:09.218 08:18:03 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:09.218 08:18:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:09.218 08:18:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2mPP2ILJ3g 00:35:09.218 08:18:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2mPP2ILJ3g 00:35:09.218 08:18:03 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.2mPP2ILJ3g 00:35:09.218 08:18:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2mPP2ILJ3g 00:35:09.218 08:18:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2mPP2ILJ3g 00:35:09.477 08:18:03 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:09.477 08:18:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:09.736 nvme0n1 00:35:09.736 08:18:03 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:09.736 08:18:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:09.736 08:18:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.736 08:18:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.736 08:18:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:09.736 08:18:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.736 08:18:03 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:09.736 08:18:03 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:09.736 08:18:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:09.994 08:18:04 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:09.994 08:18:04 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:09.994 08:18:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.994 08:18:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:09.994 08:18:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.253 08:18:04 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:10.253 08:18:04 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:10.253 08:18:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:10.253 08:18:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.253 08:18:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.253 08:18:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.253 08:18:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.512 08:18:04 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:10.512 08:18:04 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:10.512 08:18:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:10.512 08:18:04 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:10.512 08:18:04 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:10.512 08:18:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.770 08:18:04 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:10.770 08:18:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2mPP2ILJ3g 00:35:10.770 08:18:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2mPP2ILJ3g 00:35:11.029 08:18:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.v0Jq0CqzQG 00:35:11.029 08:18:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.v0Jq0CqzQG 00:35:11.287 08:18:05 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.287 08:18:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.546 nvme0n1 00:35:11.546 08:18:05 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:11.546 08:18:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:11.805 08:18:05 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:11.805 "subsystems": [ 00:35:11.805 { 00:35:11.805 "subsystem": "keyring", 00:35:11.805 "config": [ 00:35:11.805 { 00:35:11.805 "method": "keyring_file_add_key", 00:35:11.805 "params": { 00:35:11.805 "name": "key0", 00:35:11.805 "path": "/tmp/tmp.2mPP2ILJ3g" 00:35:11.805 } 00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "method": "keyring_file_add_key", 00:35:11.805 "params": { 00:35:11.805 "name": "key1", 00:35:11.805 "path": "/tmp/tmp.v0Jq0CqzQG" 00:35:11.805 } 00:35:11.805 } 00:35:11.805 ] 
00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "subsystem": "iobuf", 00:35:11.805 "config": [ 00:35:11.805 { 00:35:11.805 "method": "iobuf_set_options", 00:35:11.805 "params": { 00:35:11.805 "small_pool_count": 8192, 00:35:11.805 "large_pool_count": 1024, 00:35:11.805 "small_bufsize": 8192, 00:35:11.805 "large_bufsize": 135168, 00:35:11.805 "enable_numa": false 00:35:11.805 } 00:35:11.805 } 00:35:11.805 ] 00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "subsystem": "sock", 00:35:11.805 "config": [ 00:35:11.805 { 00:35:11.805 "method": "sock_set_default_impl", 00:35:11.805 "params": { 00:35:11.805 "impl_name": "posix" 00:35:11.805 } 00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "method": "sock_impl_set_options", 00:35:11.805 "params": { 00:35:11.805 "impl_name": "ssl", 00:35:11.805 "recv_buf_size": 4096, 00:35:11.805 "send_buf_size": 4096, 00:35:11.805 "enable_recv_pipe": true, 00:35:11.805 "enable_quickack": false, 00:35:11.805 "enable_placement_id": 0, 00:35:11.805 "enable_zerocopy_send_server": true, 00:35:11.805 "enable_zerocopy_send_client": false, 00:35:11.805 "zerocopy_threshold": 0, 00:35:11.805 "tls_version": 0, 00:35:11.805 "enable_ktls": false 00:35:11.805 } 00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "method": "sock_impl_set_options", 00:35:11.805 "params": { 00:35:11.805 "impl_name": "posix", 00:35:11.805 "recv_buf_size": 2097152, 00:35:11.805 "send_buf_size": 2097152, 00:35:11.805 "enable_recv_pipe": true, 00:35:11.805 "enable_quickack": false, 00:35:11.805 "enable_placement_id": 0, 00:35:11.805 "enable_zerocopy_send_server": true, 00:35:11.805 "enable_zerocopy_send_client": false, 00:35:11.805 "zerocopy_threshold": 0, 00:35:11.805 "tls_version": 0, 00:35:11.805 "enable_ktls": false 00:35:11.805 } 00:35:11.805 } 00:35:11.805 ] 00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "subsystem": "vmd", 00:35:11.805 "config": [] 00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "subsystem": "accel", 00:35:11.805 "config": [ 00:35:11.805 { 00:35:11.805 "method": "accel_set_options", 00:35:11.805 "params": { 00:35:11.805 "small_cache_size": 128, 00:35:11.805 "large_cache_size": 16, 00:35:11.805 "task_count": 2048, 00:35:11.805 "sequence_count": 2048, 00:35:11.805 "buf_count": 2048 00:35:11.805 } 00:35:11.805 } 00:35:11.805 ] 00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "subsystem": "bdev", 00:35:11.805 "config": [ 00:35:11.805 { 00:35:11.805 "method": "bdev_set_options", 00:35:11.805 "params": { 00:35:11.805 "bdev_io_pool_size": 65535, 00:35:11.805 "bdev_io_cache_size": 256, 00:35:11.805 "bdev_auto_examine": true, 00:35:11.805 "iobuf_small_cache_size": 128, 00:35:11.805 "iobuf_large_cache_size": 16 00:35:11.805 } 00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "method": "bdev_raid_set_options", 00:35:11.805 "params": { 00:35:11.805 "process_window_size_kb": 1024, 00:35:11.805 "process_max_bandwidth_mb_sec": 0 00:35:11.805 } 00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "method": "bdev_iscsi_set_options", 00:35:11.805 "params": { 00:35:11.805 "timeout_sec": 30 00:35:11.805 } 00:35:11.805 }, 00:35:11.805 { 00:35:11.805 "method": "bdev_nvme_set_options", 00:35:11.805 "params": { 00:35:11.805 "action_on_timeout": "none", 00:35:11.805 "timeout_us": 0, 00:35:11.805 "timeout_admin_us": 0, 00:35:11.805 "keep_alive_timeout_ms": 10000, 00:35:11.805 "arbitration_burst": 0, 00:35:11.805 "low_priority_weight": 0, 00:35:11.805 "medium_priority_weight": 0, 00:35:11.805 "high_priority_weight": 0, 00:35:11.805 "nvme_adminq_poll_period_us": 10000, 00:35:11.805 "nvme_ioq_poll_period_us": 0, 00:35:11.805 "io_queue_requests": 512, 
00:35:11.806 "delay_cmd_submit": true, 00:35:11.806 "transport_retry_count": 4, 00:35:11.806 "bdev_retry_count": 3, 00:35:11.806 "transport_ack_timeout": 0, 00:35:11.806 "ctrlr_loss_timeout_sec": 0, 00:35:11.806 "reconnect_delay_sec": 0, 00:35:11.806 "fast_io_fail_timeout_sec": 0, 00:35:11.806 "disable_auto_failback": false, 00:35:11.806 "generate_uuids": false, 00:35:11.806 "transport_tos": 0, 00:35:11.806 "nvme_error_stat": false, 00:35:11.806 "rdma_srq_size": 0, 00:35:11.806 "io_path_stat": false, 00:35:11.806 "allow_accel_sequence": false, 00:35:11.806 "rdma_max_cq_size": 0, 00:35:11.806 "rdma_cm_event_timeout_ms": 0, 00:35:11.806 "dhchap_digests": [ 00:35:11.806 "sha256", 00:35:11.806 "sha384", 00:35:11.806 "sha512" 00:35:11.806 ], 00:35:11.806 "dhchap_dhgroups": [ 00:35:11.806 "null", 00:35:11.806 "ffdhe2048", 00:35:11.806 "ffdhe3072", 00:35:11.806 "ffdhe4096", 00:35:11.806 "ffdhe6144", 00:35:11.806 "ffdhe8192" 00:35:11.806 ] 00:35:11.806 } 00:35:11.806 }, 00:35:11.806 { 00:35:11.806 "method": "bdev_nvme_attach_controller", 00:35:11.806 "params": { 00:35:11.806 "name": "nvme0", 00:35:11.806 "trtype": "TCP", 00:35:11.806 "adrfam": "IPv4", 00:35:11.806 "traddr": "127.0.0.1", 00:35:11.806 "trsvcid": "4420", 00:35:11.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.806 "prchk_reftag": false, 00:35:11.806 "prchk_guard": false, 00:35:11.806 "ctrlr_loss_timeout_sec": 0, 00:35:11.806 "reconnect_delay_sec": 0, 00:35:11.806 "fast_io_fail_timeout_sec": 0, 00:35:11.806 "psk": "key0", 00:35:11.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:11.806 "hdgst": false, 00:35:11.806 "ddgst": false, 00:35:11.806 "multipath": "multipath" 00:35:11.806 } 00:35:11.806 }, 00:35:11.806 { 00:35:11.806 "method": "bdev_nvme_set_hotplug", 00:35:11.806 "params": { 00:35:11.806 "period_us": 100000, 00:35:11.806 "enable": false 00:35:11.806 } 00:35:11.806 }, 00:35:11.806 { 00:35:11.806 "method": "bdev_wait_for_examine" 00:35:11.806 } 00:35:11.806 ] 00:35:11.806 }, 00:35:11.806 { 00:35:11.806 "subsystem": "nbd", 00:35:11.806 "config": [] 00:35:11.806 } 00:35:11.806 ] 00:35:11.806 }' 00:35:11.806 08:18:05 keyring_file -- keyring/file.sh@115 -- # killprocess 2725789 00:35:11.806 08:18:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2725789 ']' 00:35:11.806 08:18:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2725789 00:35:11.806 08:18:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:11.806 08:18:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.806 08:18:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2725789 00:35:11.806 08:18:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:11.806 08:18:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:11.806 08:18:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2725789' 00:35:11.806 killing process with pid 2725789 00:35:11.806 08:18:05 keyring_file -- common/autotest_common.sh@973 -- # kill 2725789 00:35:11.806 Received shutdown signal, test time was about 1.000000 seconds 00:35:11.806 00:35:11.806 Latency(us) 00:35:11.806 [2024-11-27T07:18:05.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.806 [2024-11-27T07:18:05.915Z] =================================================================================================================== 00:35:11.806 [2024-11-27T07:18:05.915Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:35:11.806 08:18:05 keyring_file -- common/autotest_common.sh@978 -- # wait 2725789 00:35:12.066 08:18:05 keyring_file -- keyring/file.sh@118 -- # bperfpid=2727434 00:35:12.066 08:18:05 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2727434 /var/tmp/bperf.sock 00:35:12.066 08:18:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2727434 ']' 00:35:12.066 08:18:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:12.066 08:18:05 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:12.066 08:18:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.066 08:18:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:12.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:12.066 08:18:05 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:12.066 "subsystems": [ 00:35:12.066 { 00:35:12.066 "subsystem": "keyring", 00:35:12.066 "config": [ 00:35:12.066 { 00:35:12.066 "method": "keyring_file_add_key", 00:35:12.066 "params": { 00:35:12.066 "name": "key0", 00:35:12.066 "path": "/tmp/tmp.2mPP2ILJ3g" 00:35:12.066 } 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "method": "keyring_file_add_key", 00:35:12.066 "params": { 00:35:12.066 "name": "key1", 00:35:12.066 "path": "/tmp/tmp.v0Jq0CqzQG" 00:35:12.066 } 00:35:12.066 } 00:35:12.066 ] 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "subsystem": "iobuf", 00:35:12.066 "config": [ 00:35:12.066 { 00:35:12.066 "method": "iobuf_set_options", 00:35:12.066 "params": { 00:35:12.066 "small_pool_count": 8192, 00:35:12.066 "large_pool_count": 1024, 00:35:12.066 "small_bufsize": 8192, 00:35:12.066 "large_bufsize": 135168, 00:35:12.066 "enable_numa": false 00:35:12.066 } 00:35:12.066 } 00:35:12.066 ] 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "subsystem": "sock", 00:35:12.066 "config": [ 00:35:12.066 { 00:35:12.066 "method": "sock_set_default_impl", 00:35:12.066 "params": { 00:35:12.066 "impl_name": "posix" 00:35:12.066 } 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "method": "sock_impl_set_options", 00:35:12.066 "params": { 00:35:12.066 "impl_name": "ssl", 00:35:12.066 "recv_buf_size": 4096, 00:35:12.066 "send_buf_size": 4096, 00:35:12.066 "enable_recv_pipe": true, 00:35:12.066 "enable_quickack": false, 00:35:12.066 "enable_placement_id": 0, 00:35:12.066 "enable_zerocopy_send_server": true, 00:35:12.066 "enable_zerocopy_send_client": false, 00:35:12.066 "zerocopy_threshold": 0, 00:35:12.066 "tls_version": 0, 00:35:12.066 "enable_ktls": false 00:35:12.066 } 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "method": "sock_impl_set_options", 00:35:12.066 "params": { 00:35:12.066 "impl_name": "posix", 00:35:12.066 "recv_buf_size": 2097152, 00:35:12.066 "send_buf_size": 2097152, 00:35:12.066 "enable_recv_pipe": true, 00:35:12.066 "enable_quickack": false, 00:35:12.066 "enable_placement_id": 0, 00:35:12.066 "enable_zerocopy_send_server": true, 00:35:12.066 "enable_zerocopy_send_client": false, 00:35:12.066 "zerocopy_threshold": 0, 00:35:12.066 "tls_version": 0, 00:35:12.066 "enable_ktls": false 00:35:12.066 } 00:35:12.066 } 00:35:12.066 ] 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "subsystem": "vmd", 00:35:12.066 "config": [] 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "subsystem": "accel", 00:35:12.066 
"config": [ 00:35:12.066 { 00:35:12.066 "method": "accel_set_options", 00:35:12.066 "params": { 00:35:12.066 "small_cache_size": 128, 00:35:12.066 "large_cache_size": 16, 00:35:12.066 "task_count": 2048, 00:35:12.066 "sequence_count": 2048, 00:35:12.066 "buf_count": 2048 00:35:12.066 } 00:35:12.066 } 00:35:12.066 ] 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "subsystem": "bdev", 00:35:12.066 "config": [ 00:35:12.066 { 00:35:12.066 "method": "bdev_set_options", 00:35:12.066 "params": { 00:35:12.066 "bdev_io_pool_size": 65535, 00:35:12.066 "bdev_io_cache_size": 256, 00:35:12.066 "bdev_auto_examine": true, 00:35:12.066 "iobuf_small_cache_size": 128, 00:35:12.066 "iobuf_large_cache_size": 16 00:35:12.066 } 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "method": "bdev_raid_set_options", 00:35:12.066 "params": { 00:35:12.066 "process_window_size_kb": 1024, 00:35:12.066 "process_max_bandwidth_mb_sec": 0 00:35:12.066 } 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "method": "bdev_iscsi_set_options", 00:35:12.066 "params": { 00:35:12.066 "timeout_sec": 30 00:35:12.066 } 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "method": "bdev_nvme_set_options", 00:35:12.066 "params": { 00:35:12.066 "action_on_timeout": "none", 00:35:12.066 "timeout_us": 0, 00:35:12.066 "timeout_admin_us": 0, 00:35:12.066 "keep_alive_timeout_ms": 10000, 00:35:12.066 "arbitration_burst": 0, 00:35:12.066 "low_priority_weight": 0, 00:35:12.066 "medium_priority_weight": 0, 00:35:12.066 "high_priority_weight": 0, 00:35:12.066 "nvme_adminq_poll_period_us": 10000, 00:35:12.066 "nvme_ioq_poll_period_us": 0, 00:35:12.066 "io_queue_requests": 512, 00:35:12.066 "delay_cmd_submit": true, 00:35:12.066 "transport_retry_count": 4, 00:35:12.066 "bdev_retry_count": 3, 00:35:12.066 "transport_ack_timeout": 0, 00:35:12.066 "ctrlr_loss_timeout_sec": 0, 00:35:12.066 "reconnect_delay_sec": 0, 00:35:12.066 "fast_io_fail_timeout_sec": 0, 00:35:12.066 "disable_auto_failback": false, 00:35:12.066 "generate_uuids": false, 00:35:12.066 "transport_tos": 0, 00:35:12.066 "nvme_error_stat": false, 00:35:12.066 "rdma_srq_size": 0, 00:35:12.066 "io_path_stat": false, 00:35:12.066 "allow_accel_sequence": false, 00:35:12.066 "rdma_max_cq_size": 0, 00:35:12.066 "rdma_cm_event_timeout_ms": 0, 00:35:12.066 "dhchap_digests": [ 00:35:12.066 "sha256", 00:35:12.066 "sha384", 00:35:12.066 "sha512" 00:35:12.066 ], 00:35:12.066 "dhchap_dhgroups": [ 00:35:12.066 "null", 00:35:12.066 "ffdhe2048", 00:35:12.066 "ffdhe3072", 00:35:12.066 "ffdhe4096", 00:35:12.066 "ffdhe6144", 00:35:12.066 "ffdhe8192" 00:35:12.066 ] 00:35:12.066 } 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "method": "bdev_nvme_attach_controller", 00:35:12.066 "params": { 00:35:12.066 "name": "nvme0", 00:35:12.066 "trtype": "TCP", 00:35:12.066 "adrfam": "IPv4", 00:35:12.066 "traddr": "127.0.0.1", 00:35:12.066 "trsvcid": "4420", 00:35:12.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:12.066 "prchk_reftag": false, 00:35:12.066 "prchk_guard": false, 00:35:12.066 "ctrlr_loss_timeout_sec": 0, 00:35:12.066 "reconnect_delay_sec": 0, 00:35:12.066 "fast_io_fail_timeout_sec": 0, 00:35:12.066 "psk": "key0", 00:35:12.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:12.066 "hdgst": false, 00:35:12.066 "ddgst": false, 00:35:12.066 "multipath": "multipath" 00:35:12.066 } 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "method": "bdev_nvme_set_hotplug", 00:35:12.066 "params": { 00:35:12.066 "period_us": 100000, 00:35:12.066 "enable": false 00:35:12.066 } 00:35:12.066 }, 00:35:12.066 { 00:35:12.066 "method": "bdev_wait_for_examine" 
00:35:12.066 } 00:35:12.066 ] 00:35:12.066 }, 00:35:12.066 { 00:35:12.067 "subsystem": "nbd", 00:35:12.067 "config": [] 00:35:12.067 } 00:35:12.067 ] 00:35:12.067 }' 00:35:12.067 08:18:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.067 08:18:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:12.067 [2024-11-27 08:18:05.974228] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 00:35:12.067 [2024-11-27 08:18:05.974277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727434 ] 00:35:12.067 [2024-11-27 08:18:06.035387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.067 [2024-11-27 08:18:06.079005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.326 [2024-11-27 08:18:06.241851] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:12.893 08:18:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.893 08:18:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:12.893 08:18:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:12.893 08:18:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:12.893 08:18:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.152 08:18:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:13.152 08:18:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:13.152 08:18:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:13.152 08:18:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.152 08:18:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.152 08:18:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:13.152 08:18:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.152 08:18:07 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:13.152 08:18:07 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:13.152 08:18:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:13.152 08:18:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.152 08:18:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.152 08:18:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:13.152 08:18:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.411 08:18:07 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:13.411 08:18:07 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:13.411 08:18:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:13.411 08:18:07 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:13.670 08:18:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:13.670 08:18:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:13.670 08:18:07 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2mPP2ILJ3g /tmp/tmp.v0Jq0CqzQG 00:35:13.670 08:18:07 keyring_file -- keyring/file.sh@20 -- # killprocess 2727434 00:35:13.670 08:18:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2727434 ']' 00:35:13.670 08:18:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2727434 00:35:13.670 08:18:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:13.670 08:18:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:13.670 08:18:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2727434 00:35:13.670 08:18:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:13.670 08:18:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:13.671 08:18:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2727434' 00:35:13.671 killing process with pid 2727434 00:35:13.671 08:18:07 keyring_file -- common/autotest_common.sh@973 -- # kill 2727434 00:35:13.671 Received shutdown signal, test time was about 1.000000 seconds 00:35:13.671 00:35:13.671 Latency(us) 00:35:13.671 [2024-11-27T07:18:07.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.671 [2024-11-27T07:18:07.780Z] =================================================================================================================== 00:35:13.671 [2024-11-27T07:18:07.780Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:13.671 08:18:07 keyring_file -- common/autotest_common.sh@978 -- # wait 2727434 00:35:13.929 08:18:07 keyring_file -- keyring/file.sh@21 -- # killprocess 2725689 00:35:13.929 08:18:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2725689 ']' 00:35:13.929 08:18:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2725689 00:35:13.929 08:18:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:13.929 08:18:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:13.929 08:18:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2725689 00:35:13.929 08:18:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:13.929 08:18:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:13.929 08:18:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2725689' 00:35:13.929 killing process with pid 2725689 00:35:13.929 08:18:07 keyring_file -- common/autotest_common.sh@973 -- # kill 2725689 00:35:13.929 08:18:07 keyring_file -- common/autotest_common.sh@978 -- # wait 2725689 00:35:14.188 00:35:14.188 real 0m11.814s 00:35:14.188 user 0m29.258s 00:35:14.188 sys 0m2.752s 00:35:14.188 08:18:08 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.188 08:18:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:14.188 ************************************ 00:35:14.188 END TEST keyring_file 00:35:14.188 ************************************ 00:35:14.188 08:18:08 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:14.188 08:18:08 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:14.188 08:18:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:14.188 08:18:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 
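Before that second pass, file.sh@116-118 restarted bdevperf non-interactively: the configuration captured by save_config at file.sh@113 was echoed back to the new process on /dev/fd/63, so the two keyring keys and the PSK-protected controller were recreated from config and the test only had to verify them over RPC (two keys, refcnt 2 on key0 and 1 on key1). A hedged stand-alone equivalent, assuming that JSON was written to saved_config.json and the command is run from the spdk checkout:

# Sketch only: start bdevperf from a saved subsystem config instead of issuing
# the keyring/attach RPCs one by one (flags copied from the logged command).
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(cat saved_config.json)
# The process substitution shows up as a /dev/fd path (here /dev/fd/63),
# matching the command line recorded above.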
00:35:14.188 08:18:08 -- common/autotest_common.sh@10 -- # set +x 00:35:14.188 ************************************ 00:35:14.188 START TEST keyring_linux 00:35:14.188 ************************************ 00:35:14.188 08:18:08 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:14.188 Joined session keyring: 406543182 00:35:14.447 * Looking for test storage... 00:35:14.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:14.447 08:18:08 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:14.447 08:18:08 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:14.447 08:18:08 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:14.447 08:18:08 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:14.447 08:18:08 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:14.447 08:18:08 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:14.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.447 --rc genhtml_branch_coverage=1 00:35:14.447 --rc genhtml_function_coverage=1 00:35:14.447 --rc genhtml_legend=1 00:35:14.447 --rc geninfo_all_blocks=1 00:35:14.447 --rc geninfo_unexecuted_blocks=1 00:35:14.447 00:35:14.447 ' 00:35:14.447 08:18:08 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:14.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.447 --rc genhtml_branch_coverage=1 00:35:14.447 --rc genhtml_function_coverage=1 00:35:14.447 --rc genhtml_legend=1 00:35:14.447 --rc geninfo_all_blocks=1 00:35:14.447 --rc geninfo_unexecuted_blocks=1 00:35:14.447 00:35:14.447 ' 00:35:14.447 08:18:08 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:14.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.447 --rc genhtml_branch_coverage=1 00:35:14.447 --rc genhtml_function_coverage=1 00:35:14.447 --rc genhtml_legend=1 00:35:14.447 --rc geninfo_all_blocks=1 00:35:14.447 --rc geninfo_unexecuted_blocks=1 00:35:14.447 00:35:14.447 ' 00:35:14.447 08:18:08 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:14.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.447 --rc genhtml_branch_coverage=1 00:35:14.447 --rc genhtml_function_coverage=1 00:35:14.447 --rc genhtml_legend=1 00:35:14.447 --rc geninfo_all_blocks=1 00:35:14.447 --rc geninfo_unexecuted_blocks=1 00:35:14.447 00:35:14.447 ' 00:35:14.447 08:18:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:14.447 08:18:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:14.447 08:18:08 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.447 08:18:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.448 08:18:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.448 08:18:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.448 08:18:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.448 08:18:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:14.448 08:18:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:14.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:14.448 08:18:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:14.448 08:18:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:14.448 08:18:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:14.448 08:18:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:14.448 08:18:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:14.448 08:18:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:14.448 /tmp/:spdk-test:key0 00:35:14.448 08:18:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:14.448 
08:18:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:14.448 08:18:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:14.448 08:18:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:14.448 /tmp/:spdk-test:key1 00:35:14.448 08:18:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2728121 00:35:14.448 08:18:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2728121 00:35:14.448 08:18:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:14.448 08:18:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2728121 ']' 00:35:14.448 08:18:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.448 08:18:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.448 08:18:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.448 08:18:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.448 08:18:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:14.706 [2024-11-27 08:18:08.600320] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:35:14.706 [2024-11-27 08:18:08.600374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728121 ] 00:35:14.706 [2024-11-27 08:18:08.663274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.706 [2024-11-27 08:18:08.703973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:14.964 08:18:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:14.964 [2024-11-27 08:18:08.919937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.964 null0 00:35:14.964 [2024-11-27 08:18:08.951987] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:14.964 [2024-11-27 08:18:08.952342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.964 08:18:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:14.964 728123271 00:35:14.964 08:18:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:14.964 152315244 00:35:14.964 08:18:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2728182 00:35:14.964 08:18:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2728182 /var/tmp/bperf.sock 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2728182 ']' 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.964 08:18:08 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.964 08:18:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:14.964 [2024-11-27 08:18:09.023673] Starting SPDK v25.01-pre git sha1 4c65c6406 / DPDK 24.03.0 initialization... 
00:35:14.964 [2024-11-27 08:18:09.023717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728182 ] 00:35:15.223 [2024-11-27 08:18:09.085846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.223 [2024-11-27 08:18:09.129180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.223 08:18:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.223 08:18:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:15.223 08:18:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:15.223 08:18:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:15.490 08:18:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:15.490 08:18:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:15.747 08:18:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:15.747 08:18:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:15.747 [2024-11-27 08:18:09.779259] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:15.747 nvme0n1 00:35:16.005 08:18:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:16.005 08:18:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:16.005 08:18:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:16.005 08:18:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:16.005 08:18:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:16.005 08:18:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.005 08:18:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:16.005 08:18:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:16.005 08:18:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:16.005 08:18:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:16.005 08:18:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:16.005 08:18:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.005 08:18:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.263 08:18:10 keyring_linux -- keyring/linux.sh@25 -- # sn=728123271 00:35:16.263 08:18:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:16.263 08:18:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:16.263 08:18:10 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 728123271 == \7\2\8\1\2\3\2\7\1 ]] 00:35:16.263 08:18:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 728123271 00:35:16.263 08:18:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:16.263 08:18:10 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:16.263 Running I/O for 1 seconds... 00:35:17.637 18866.00 IOPS, 73.70 MiB/s 00:35:17.638 Latency(us) 00:35:17.638 [2024-11-27T07:18:11.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.638 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:17.638 nvme0n1 : 1.01 18865.47 73.69 0.00 0.00 6759.26 5584.81 13221.18 00:35:17.638 [2024-11-27T07:18:11.747Z] =================================================================================================================== 00:35:17.638 [2024-11-27T07:18:11.747Z] Total : 18865.47 73.69 0.00 0.00 6759.26 5584.81 13221.18 00:35:17.638 { 00:35:17.638 "results": [ 00:35:17.638 { 00:35:17.638 "job": "nvme0n1", 00:35:17.638 "core_mask": "0x2", 00:35:17.638 "workload": "randread", 00:35:17.638 "status": "finished", 00:35:17.638 "queue_depth": 128, 00:35:17.638 "io_size": 4096, 00:35:17.638 "runtime": 1.006813, 00:35:17.638 "iops": 18865.469555915548, 00:35:17.638 "mibps": 73.69324045279511, 00:35:17.638 "io_failed": 0, 00:35:17.638 "io_timeout": 0, 00:35:17.638 "avg_latency_us": 6759.257525168131, 00:35:17.638 "min_latency_us": 5584.806956521739, 00:35:17.638 "max_latency_us": 13221.175652173913 00:35:17.638 } 00:35:17.638 ], 00:35:17.638 "core_count": 1 00:35:17.638 } 00:35:17.638 08:18:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:17.638 08:18:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:17.638 08:18:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:17.638 08:18:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:17.638 08:18:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:17.638 08:18:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:17.638 08:18:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:17.638 08:18:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:17.897 08:18:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:17.897 [2024-11-27 08:18:11.943453] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:17.897 [2024-11-27 08:18:11.944168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c4fa0 (107): Transport endpoint is not connected 00:35:17.897 [2024-11-27 08:18:11.945151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c4fa0 (9): Bad file descriptor 00:35:17.897 [2024-11-27 08:18:11.946153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:17.897 [2024-11-27 08:18:11.946164] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:17.897 [2024-11-27 08:18:11.946171] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:17.897 [2024-11-27 08:18:11.946181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:17.897 request: 00:35:17.897 { 00:35:17.897 "name": "nvme0", 00:35:17.897 "trtype": "tcp", 00:35:17.897 "traddr": "127.0.0.1", 00:35:17.897 "adrfam": "ipv4", 00:35:17.897 "trsvcid": "4420", 00:35:17.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.897 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:17.897 "prchk_reftag": false, 00:35:17.897 "prchk_guard": false, 00:35:17.897 "hdgst": false, 00:35:17.897 "ddgst": false, 00:35:17.897 "psk": ":spdk-test:key1", 00:35:17.897 "allow_unrecognized_csi": false, 00:35:17.897 "method": "bdev_nvme_attach_controller", 00:35:17.897 "req_id": 1 00:35:17.897 } 00:35:17.897 Got JSON-RPC error response 00:35:17.897 response: 00:35:17.897 { 00:35:17.897 "code": -5, 00:35:17.897 "message": "Input/output error" 00:35:17.897 } 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@33 -- # sn=728123271 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 728123271 00:35:17.897 1 links removed 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@33 -- # sn=152315244 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 152315244 00:35:17.897 1 links removed 00:35:17.897 08:18:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2728182 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2728182 ']' 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2728182 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.897 08:18:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728182 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728182' 00:35:18.156 killing process with pid 2728182 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 2728182 00:35:18.156 Received shutdown signal, test time was about 1.000000 seconds 00:35:18.156 00:35:18.156 
Latency(us) 00:35:18.156 [2024-11-27T07:18:12.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.156 [2024-11-27T07:18:12.265Z] =================================================================================================================== 00:35:18.156 [2024-11-27T07:18:12.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 2728182 00:35:18.156 08:18:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2728121 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2728121 ']' 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2728121 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728121 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728121' 00:35:18.156 killing process with pid 2728121 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 2728121 00:35:18.156 08:18:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 2728121 00:35:18.724 00:35:18.724 real 0m4.270s 00:35:18.724 user 0m7.979s 00:35:18.724 sys 0m1.416s 00:35:18.724 08:18:12 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:18.724 08:18:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:18.724 ************************************ 00:35:18.724 END TEST keyring_linux 00:35:18.724 ************************************ 00:35:18.724 08:18:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:18.724 08:18:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:18.724 08:18:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:18.724 08:18:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:18.724 08:18:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:18.724 08:18:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:18.724 08:18:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:18.724 08:18:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:18.724 08:18:12 -- common/autotest_common.sh@10 -- # set +x 00:35:18.724 08:18:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:18.724 08:18:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:18.724 08:18:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:18.724 08:18:12 -- common/autotest_common.sh@10 -- # set +x 00:35:22.915 INFO: APP EXITING 
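The keyring_linux test that finishes here differs from keyring_file only in where the PSK lives: the interchange string is stored in the kernel session keyring with keyctl and referenced by name as ":spdk-test:key0", and because bdevperf was started with --wait-for-rpc (linux.sh@68) the kernel-keyring lookup has to be enabled before framework_start_init. A condensed, hedged sketch of linux.sh@66-84, assuming the same bperf socket and target as above and a session keyring joined via scripts/keyctl-session-wrapper:

# Sketch only, not part of the recorded run.
RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"
SN=$(keyctl add user :spdk-test:key0 \
    'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
$RPC keyring_linux_set_options --enable      # let the app resolve kernel keys
$RPC framework_start_init
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
keyctl print "$SN"                           # prints the interchange-format PSK
$RPC bdev_nvme_detach_controller nvme0
keyctl unlink "$SN"                          # "1 links removed", as in the cleanup above

Attaching with a key the target was never given (":spdk-test:key1" above) fails during connection setup and comes back as the -5 "Input/output error" JSON-RPC response, in contrast to the -19 "No such device" returned earlier when a key file was simply missing.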
00:35:22.915 INFO: killing all VMs 00:35:22.915 INFO: killing vhost app 00:35:22.915 INFO: EXIT DONE 00:35:25.451 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:25.451 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:25.451 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:25.451 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:25.451 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:25.710 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:29.004 Cleaning 00:35:29.004 Removing: /var/run/dpdk/spdk0/config 00:35:29.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:29.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:29.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:29.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:29.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:29.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:29.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:29.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:29.004 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:29.004 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:29.004 Removing: /var/run/dpdk/spdk1/config 00:35:29.004 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:29.004 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:29.004 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:29.004 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:29.004 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:29.004 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:29.004 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:29.004 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:29.004 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:29.004 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:29.004 Removing: /var/run/dpdk/spdk2/config 00:35:29.004 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:29.004 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:29.004 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:29.004 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:29.004 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:29.004 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:29.004 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:29.004 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:29.004 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:29.004 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:29.004 Removing: /var/run/dpdk/spdk3/config 00:35:29.004 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:29.004 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:29.004 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:29.004 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:29.004 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:29.004 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:29.004 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:29.004 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:29.004 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:29.004 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:29.004 Removing: /var/run/dpdk/spdk4/config 00:35:29.004 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:29.004 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:29.004 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:29.004 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:29.004 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:29.004 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:29.004 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:29.004 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:29.004 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:29.004 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:29.004 Removing: /dev/shm/bdev_svc_trace.1 00:35:29.004 Removing: /dev/shm/nvmf_trace.0 00:35:29.004 Removing: /dev/shm/spdk_tgt_trace.pid2255257 00:35:29.004 Removing: /var/run/dpdk/spdk0 00:35:29.004 Removing: /var/run/dpdk/spdk1 00:35:29.004 Removing: /var/run/dpdk/spdk2 00:35:29.004 Removing: /var/run/dpdk/spdk3 00:35:29.004 Removing: /var/run/dpdk/spdk4 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2253115 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2254181 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2255257 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2255897 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2256842 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2256865 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2257836 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2258017 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2258205 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2259923 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2260987 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2261360 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2261573 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2261877 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2262167 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2262420 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2262671 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2262952 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2263822 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2267208 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2267465 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2267719 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2267728 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2268220 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2268233 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2268721 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2268724 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2268996 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2269114 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2269262 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2269487 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2269835 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2270082 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2270388 00:35:29.004 Removing: 
/var/run/dpdk/spdk_pid2274087 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2278491 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2288440 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2289061 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2293274 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2293594 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2297685 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2303516 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2306120 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2316616 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2325530 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2327271 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2328248 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2344793 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2348787 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2394548 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2399943 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2405557 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2411980 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2411983 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2413020 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2414323 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2415065 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2415712 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2415715 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2415943 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2416175 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2416177 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2417069 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2417798 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2418712 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2419334 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2419393 00:35:29.004 Removing: /var/run/dpdk/spdk_pid2419632 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2420652 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2421639 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2429729 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2458631 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2463136 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2464740 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2466578 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2466807 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2466824 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2467057 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2467571 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2469403 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2470164 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2470662 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2472759 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2473250 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2473753 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2477998 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2483383 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2483385 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2483387 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2487281 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2495772 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2499637 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2505631 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2506922 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2508237 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2509561 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2514036 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2518375 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2522398 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2529555 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2529592 00:35:29.005 Removing: 
/var/run/dpdk/spdk_pid2534263 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2534454 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2534622 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2534967 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2535046 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2539688 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2540252 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2544876 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2547626 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2552801 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2558229 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2566917 00:35:29.005 Removing: /var/run/dpdk/spdk_pid2573919 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2573946 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2593276 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2593886 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2594369 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2594940 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2595582 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2596172 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2596741 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2597218 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2601250 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2601484 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2607563 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2607622 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2613052 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2617100 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2626818 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2627322 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2631411 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2631783 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2635927 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2641940 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2644518 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2654243 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2662913 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2664551 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2665430 00:35:29.263 Removing: /var/run/dpdk/spdk_pid2681333 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2685280 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2688354 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2695859 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2695865 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2700883 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2702783 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2704726 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2705857 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2707833 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2708905 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2717672 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2718308 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2718774 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2721036 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2721505 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2721966 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2725689 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2725789 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2727434 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2728121 00:35:29.264 Removing: /var/run/dpdk/spdk_pid2728182 00:35:29.264 Clean 00:35:29.523 08:18:23 -- common/autotest_common.sh@1453 -- # return 0 00:35:29.523 08:18:23 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:29.523 08:18:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:29.523 08:18:23 -- common/autotest_common.sh@10 -- # set +x 00:35:29.523 08:18:23 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:35:29.523 08:18:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:29.523 08:18:23 -- common/autotest_common.sh@10 -- # set +x 00:35:29.523 08:18:23 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:29.523 08:18:23 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:29.523 08:18:23 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:29.523 08:18:23 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:29.523 08:18:23 -- spdk/autotest.sh@398 -- # hostname 00:35:29.523 08:18:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:29.523 geninfo: WARNING: invalid characters removed from testname! 00:35:51.457 08:18:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:53.362 08:18:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:55.272 08:18:49 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:57.185 08:18:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:59.091 08:18:53 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:01.022 08:18:54 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:02.924 08:18:56 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:02.924 08:18:56 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:02.924 08:18:56 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:02.924 08:18:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:02.924 08:18:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:02.924 08:18:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:02.924 + [[ -n 2176940 ]] 00:36:02.924 + sudo kill 2176940 00:36:02.934 [Pipeline] } 00:36:02.952 [Pipeline] // stage 00:36:02.958 [Pipeline] } 00:36:02.974 [Pipeline] // timeout 00:36:02.980 [Pipeline] } 00:36:02.994 [Pipeline] // catchError 00:36:02.999 [Pipeline] } 00:36:03.013 [Pipeline] // wrap 00:36:03.019 [Pipeline] } 00:36:03.032 [Pipeline] // catchError 00:36:03.042 [Pipeline] stage 00:36:03.044 [Pipeline] { (Epilogue) 00:36:03.057 [Pipeline] catchError 00:36:03.059 [Pipeline] { 00:36:03.073 [Pipeline] echo 00:36:03.075 Cleanup processes 00:36:03.081 [Pipeline] sh 00:36:03.452 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:03.452 2738709 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:03.483 [Pipeline] sh 00:36:03.801 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:03.801 ++ grep -v 'sudo pgrep' 00:36:03.801 ++ awk '{print $1}' 00:36:03.801 + sudo kill -9 00:36:03.801 + true 00:36:03.813 [Pipeline] sh 00:36:04.097 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:16.317 [Pipeline] sh 00:36:16.602 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:16.602 Artifacts sizes are good 00:36:16.618 [Pipeline] archiveArtifacts 00:36:16.625 Archiving artifacts 00:36:16.763 [Pipeline] sh 00:36:17.054 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:17.070 [Pipeline] cleanWs 00:36:17.080 [WS-CLEANUP] Deleting project workspace... 00:36:17.080 [WS-CLEANUP] Deferred wipeout is used... 00:36:17.087 [WS-CLEANUP] done 00:36:17.089 [Pipeline] } 00:36:17.109 [Pipeline] // catchError 00:36:17.122 [Pipeline] sh 00:36:17.405 + logger -p user.info -t JENKINS-CI 00:36:17.414 [Pipeline] } 00:36:17.428 [Pipeline] // stage 00:36:17.434 [Pipeline] } 00:36:17.449 [Pipeline] // node 00:36:17.455 [Pipeline] End of Pipeline 00:36:17.491 Finished: SUCCESS